AI leaders (and Elon Musk) urge all labs to press pause on powerful AI

We got GPT-4. We could stop there for now, placing a moratorium on new AI systems more powerful than that.

Elon Musk speaks onstage during the World Artificial Intelligence Conference in Shanghai on August 29, 2019.
Hector Retamal/AFP via Getty Images
Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

Some of the biggest names in AI are raising the alarm about their own creations. In an open letter published Tuesday, more than 1,100 signatories called for a moratorium on state-of-the-art AI development.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” reads the letter, released by the Future of Life Institute, a nonprofit that works to reduce catastrophic and existential risks. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

These are powerful words from powerful people. Signatories include Elon Musk, who co-founded GPT-4 maker OpenAI before breaking with the company in 2018, along with Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn.

More to the point, the signatories include foundational figures in artificial intelligence, including Yoshua Bengio, who pioneered the AI approach known as deep learning; Stuart Russell, a leading researcher at UC Berkeley’s Center for Human-Compatible AI; and Victoria Krakovna, a research scientist at DeepMind.

These are people who know AI. And they’re warning that society is not ready for the increasingly advanced systems that labs are racing to deploy.

There’s an understandable impulse here to eye-roll. After all, the signatories include some of the very people who are pushing out the generative AI models that the letter warns about. People like Emad Mostaque, the CEO of Stability AI, which released the text-to-image model Stable Diffusion last year.

But given the high stakes around rapid AI development, we have two options. Option one is to object, “These are the people who got us into this mess!” Option two is to object, “These are the people who got us into this mess!” — and then put pressure on them to do everything they can to stop the mess from spiraling out of control.

The letter is right to argue that there’s still a lot we can do.

We can — and should — slow down AI progress

Some people assume that we can’t slow down technological progress. Or that even if we can, we shouldn’t, because AI can bring the world so many benefits.

Both those assumptions start to fall apart when you think about them.

As I wrote in my piece laying out the case for slowing down AI, there is no technological inevitability, no law of nature, declaring that we must get GPT-5 next year and GPT-6 the year after. Which types of AI we choose to build or not build, how fast or how slow we choose to go — these are decisions that are up to us humans to make.

Although it might seem like an AI race is inevitable because of the profit and prestige incentives in the industry — and because of the geopolitical competition — all that really means is that the true challenge is to change the underlying incentive structure that drives all actors.

The open letter echoes this point. We need a moratorium on powerful AI, it says, so we have a chance to ask ourselves:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

In other words: We don’t have to build robots that will steal our jobs and maybe kill us.

Slowing down a new technology is not some radical idea, destined for futility. Humanity has done this before — even with economically valuable technologies. Just think of human cloning or human germline modification. The recombinant DNA researchers behind the Asilomar Conference of 1975 famously organized a moratorium on certain experiments. Scientists definitely can modify the human germline, and they probably could clone humans. But with rare exceptions like the Chinese scientist He Jiankui — who was sentenced to three years in prison for his work on modifying human embryos — they don’t.

What about the other assumption — that we shouldn’t slow down AI because it can bring the world so many benefits?

The key point here is that we’ve got to strike a wise balance between potential benefits and potential risks. It doesn’t make sense to barrel ahead with developing ever-more-powerful AI without at least some measure of confidence that the risks will be manageable. And those risks aren’t just about whether advanced AI could one day pose an existential threat to humanity, but about whether these systems will change the world in ways many of us would reject. The more power the machinery has to disrupt life, the more confident we’d better be that we can handle the disruptions and think they’re worthwhile.

Exactly what we would do with a six-month pause is less clear. Congress, and the federal government more broadly, lacks deep expertise in artificial intelligence, and the unprecedented pace and power of AI make developing standards to control it that much more difficult. But if anything, this uncertainty bolsters the case for taking a breath.

Again, this is not a radical position. Sam Altman, OpenAI’s CEO, has said as much. He recently told ABC News that he’s “a little bit scared” of the tech his company is creating, including how quickly it may replace some jobs.

“I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts,” Altman said. “But if this happens in a single-digit number of years, some of these shifts ... That is the part I worry about the most.”

In fact, OpenAI said in a recent statement: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The tech heavyweights who signed the open letter agree. That point, they say, is now.
