Behind the Curtain: What AI's top leaders think about its dangerous potential
Published Date: 2/27/2024
Source: axios.com

Not since the atomic bomb has so much money been spent in so little time on a technology its own creators admit could ... wipe out our entire species.

Why it matters: Most people ignore or dismiss the sentence above because it seems too ludicrous to be true. But as technological savants crank out new large-language-model wonders, it's worth pausing to hear their own warnings.

The big picture: Sometimes our world changes so fast, in so many head-spinning ways, that it's impossible to fully capture the wildness — and weirdness — of it all.

  • It's easy to think it's all hype when Google's AI tool, Gemini, portrays a white American founding father as Black, or when an early ChatGPT model urged a reporter to leave his wife.
  • But even skeptics believe that when some of the biggest companies in the world pour this much coin into one technology, they're bound to will it into something very powerful.

Let's set aside for one column whether generative AI will save or destroy humanity and focus on the actual words from actual creators of it:

  • Dario Amodei, who has raised $7.3 billion for his AI start-up Anthropic after leaving OpenAI over ethical concerns, says there's a 10% to 25% chance that AI technology could destroy humanity. But if that doesn't happen, he said, "it'll go not just fine, it'll go really, really great."
  • Fei-Fei Li, a renowned AI scholar who is co-director of Stanford's Human-Centered AI Institute, told MIT Technology Review last year that AI's "catastrophic risks to society" include practical, "rubber meets the road" problems — misinformation, workforce disruption, bias and privacy infringements.
  • Geoffrey Hinton — known as "The Godfather of AI," and for 10 years one of Google's AI leaders — warns anyone who'll listen that we're creating a technology that in our lifetimes could control and obliterate us.

OK, maybe that's nuts. But Ilya Sutskever, a pioneer scientist at OpenAI, has warned the exact same thing. "He became fearful that the technology could wipe out humanity," The New York Times reported nonchalantly.

  • OpenAI CEO Sam Altman takes the stance that AI will probably be great — but still warns we must be careful we don't destroy humanity. Altman, along with the three men above, was among the signatories to this chilling one-sentence statement last year: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

With certainty equal to that of the people he calls the "doomers," Marc Andreessen, one of America's brightest tech investors, argues that a techno-utopia awaits us.

  • He expounds "effective accelerationism" — e/acc (pronounced ee-ACK), a play on the "effective altruism" movement associated with Sam Bankman-Fried — pushing to move faster, with no limits, to bring this AI tech to life and spread it everywhere. "The techno-capital machine works for us. All the machines work for us," he writes in his "Techno-Optimist Manifesto."

This debate often unfolds on X, where Elon Musk — who has warned that humans one day may need to merge with machines — is racing to unleash his own AI model, Grok, which has a mode that responds with mischievous humor.

Between the lines: Both sides of this debate share the view that AI is unimaginably, gigantically powerful, whether it destroys us or saves us.

  • Probably true! But that's also a self-serving view that puts their work at the center of the universe, notes Scott Rosenberg, Axios managing editor for tech.
  • The history of tech is that things get overhyped — then wind up being big, but not as huge as sold.

What we're watching: OpenAI, knowing that people fear AI destroying the world, is trying to mitigate those fears by building trust that it's deploying products as safely as possible.

  • For instance, Sora — OpenAI's new video tool, which conjures cinematic-quality clips from plain-text prompts — for now is available only to select creators, researchers, and "red teamers" to assess harms and risks.
  • AI's creators know that, at least temporarily, online mayhem and mischief will be inevitable when these tools are open to all. So OpenAI is trying to tighten safeguards before releasing Sora into the wild.

We had a fascinating conversation with Srinivas Narayanan, vice president of engineering at OpenAI, who leads the teams that build products, including ChatGPT and Sora. "I'm going to be very humble and say I just don't know," Narayanan said about the doomer view. "I think it's important for us to approach this with humility."

  • So, he said, an "iterative deployment strategy," in which the company researches and learns before fully unleashing a new product, is a key part of OpenAI's values.

"We are proactive about talking about the risks of these models," Narayanan said from San Francisco. "I want us to be proactive about the harms that could happen, and do the research that is necessary in order to give us the clarity that we need."

  • "We want AI systems to be great assistive tools to us in accomplishing the things we want," he added. "But we as humans will have to express the values and what we want out of these systems. That's the future we all want to build."

The bottom line: "The future I want is one where humans are still guiding the AI systems, right?" Narayanan added hopefully. "Ultimately, that's what we want."