
Tyler Cowen: New laws to regulate AI would be premature

FILE - Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. Anthropic, ChatGPT-maker OpenAI and other major developers of AI systems known as large language models say they're hard at work to make them more truthful. (AP Photo/Richard Drew, File)

All of a sudden there is a flurry of activity around artificial intelligence policy. On Oct. 30, President Joe Biden issued an executive order on the topic. An AI safety summit is being held in the UK later this week. And last week, the U.S. Senate held a closed-door forum on research and development in AI.

I spoke at the Senate forum, convened by Majority Leader Chuck Schumer. Here’s an outline of what I told the panel about how the U.S. can boost progress in AI and improve its national security.

First, the U.S. should allow in many more high-skilled foreign citizens, most of all those who work in AI and related fields. As you might expect, many of the key contributors to AI progress — such as Geoffrey Hinton (British-Canadian) and Mira Murati (Albanian) — come from abroad. Perhaps the U.S. will never be able to compete with China when it comes to assembling raw computing power, but many of the world’s best and brightest would prefer to live in America. The government should make their path as easy as possible.

Artificial intelligence also means that science probably is going to move faster in the future. That applies not only to AI itself, but also to the sciences and practices that will benefit, such as computational biology and green energy. The U.S. cannot afford the luxury of its current slow procurement and funding cycles. Biomedical science funding should work more like the nimble National Science Foundation and less like the bureaucratic National Institutes of Health. Better yet, DARPA (Defense Advanced Research Projects Agency) models could be applied more broadly to give program managers greater authority to take risks with their grants.

Those changes would make it more likely that new and forthcoming AI tools will translate into better outcomes for ordinary Americans.

The U.S. should also speed up permitting reform. Construction of more and better semiconductor plants is a priority, both for national security and for AI progress more generally, as recognized by the CHIPS Act. Yet the need for multiple layers of permits and environmental review slows down this process and raises costs. There is a general recognition that permitting reform is needed, but it hasn’t happened.

As the rate of scientific progress increases, regulation may need to adapt. Many critics have charged that FDA approval processes are too slow and conservative. That problem could become much worse if the number of new candidate drugs were to increase by two or three times. It is unrealistic to expect the government to become as fast as the AIs, but it can certainly be faster than it is now.

What about the need for more regulation?

In the short run, the U.S. can beef up, reform and reconsider what is sometimes called “modular regulation.” If an AI were to issue health or diagnostic advice, for example, it would be covered by current regulatory bodies — federal, state and local. At all levels, those institutions need to make significant changes. Sometimes that will involve more regulation and sometimes less, but now is the time to start those reappraisals.

What if an AI gives diagnostic advice that is better than that of human doctors — but is still not perfect? Should the AI company be subject to medical malpractice law? I would prefer a “user beware” approach, as currently exists for googling medical advice. But obviously this issue requires deeper consideration. The same concern applies to AI legal advice: Plenty of current laws apply, but they need to be revised to match new technologies.

The U.S. should not, at the moment, regulate or license AI services as entities unto themselves. Obviously current AI services fall under extant laws, including laws against violence and fraud.

Over time, I am confident that people will figure out what exactly AIs, including large language models, are best used for. Industry structure may become relatively stable, and risks will be better known. It will be clear whether American AI service providers have kept their lead over China's.

At that point — but not until then — the U.S. might consider more general regulations for AI. Market experimentation has the highest return now, when we are debating the best and most appropriate use cases for AI. It is unrealistic to expect bureaucrats, few of whom have any AI expertise, to figure out answers to these questions.

In the meantime, it does not work to license AIs on the condition that they prove they will not cause any harm, or are very unlikely to. The technology is very general, its future uses are hard to predict and some harms could be the fault of the users, not the company behind the service. It would not have been wise to make such demands of the printing press, or of automation, in their early days. And licensing regimes have an unfortunate tendency to devolve into bureaucratic or political squabbling.

In any case: The time to act is now. The U.S. needs to get on with it.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Tyler Cowen is a Bloomberg Opinion columnist, a professor of economics at George Mason University and host of the Marginal Revolution blog.
