The OpenAI saga's outcome leaves billionaires to shape AI
Published Date: 3/11/2024
Source: axios.com

Friday's announcement that OpenAI CEO Sam Altman will return to the nonprofit's board locks Silicon Valley's billionaire class into control of the destiny of society-transforming artificial intelligence.

Why it matters: AI will be shaped by rich men and the markets that made them rich, not by the scientists and engineers who are building it or the governments that will have to deal with its impact.

Catch up quick: OpenAI's board fired Altman last November in a crisis that shook the AI world — but within a few tumultuous days, Altman was back in charge after most of the company threatened to quit.

  • At the time, the board members who ousted Altman said they'd lost trust in him. But they never explained why or how.

The latest: Lawyers who conducted an outside investigation into the board fight found no malfeasance, financial impropriety or product safety-related disagreement behind the firing, OpenAI said Friday.

  • Apparently, it really was all about a breakdown in trust.

But a breakdown in trust between a board and its CEO is a big deal, and we still have little idea what actually happened last November at the company responsible for ChatGPT.

  • OpenAI isn't releasing the full investigation report, only a brief summary. The board members who fired Altman have never shared their story in full detail.

The intrigue: The New York Times reported last week that OpenAI CTO Mira Murati played a "pivotal role" in Altman's firing.

  • In a post on X, Murati, who served very briefly as OpenAI's interim CEO before backing Altman's return, said the NYT story reflected "the previous board's efforts to scapegoat me with anonymous and misleading claims."
  • Murati, in a memo to staff that she also posted on X, said she'd given Altman critical feedback directly, then shared the feedback with board members when they asked her about it.

The upshot now is that OpenAI is moving ahead at warp speed with Altman's original strategy: funding AI development by selling shares in a for-profit subsidiary to Microsoft and other investors.

  • The old board's bungled coup is almost certainly the last time anyone will be in a position to challenge Altman's leadership of OpenAI — or his belief that a nonprofit can fulfill a mission of benefiting humanity by behaving like a for-profit startup pursuing hyper-growth.
  • The new board is unlikely to block a strategy that the firm's former board got canned for questioning. And its roster, even with the addition of three accomplished female members announced Friday, no longer includes specialists in AI ethics.

The big picture: Altman became a billionaire himself as a startup investor and leader of Silicon Valley's marquee startup incubator, Y Combinator.

  • His world venerates the startup as a kind of artistic canvas for entrepreneurs and as capitalism's tool for world change.
  • That's why even the ostensibly humanitarian projects Altman has pursued — like Worldcoin, which is deploying a crypto token and global identity system — look and feel more like startups than philanthropies.

Elon Musk, one of OpenAI's cofounders, sued OpenAI and Altman last week, alleging the company has abandoned its original mission.

  • Musk's complaint claims that OpenAI no longer prioritizes serving humanity over "maximizing profits for Microsoft."

Reality check: Musk, the on-again, off-again richest man in the world, is funding his own AI company, and emails OpenAI posted in response to the suit suggest he once agreed with Altman's raise-big, go-big strategy.

Yes, but: There's some common-sense truth, if not necessarily legal merit or practical value, in Musk's message.

  • OpenAI's Rube Goldberg-like structure, in which a nonprofit board governs a capped-profit operating company, looks even less reliable after last November's crisis.
  • Efforts to "strengthen governance" at the company could rid the firm of its remaining nonprofit trappings, leaving it even more like a standard-order tech corporation.
  • OpenAI has said that its new board will consider broader changes to the firm's structure.

Friction point: There are still plenty of people in the AI field today who believe the technology carries a risk of destroying humanity.

  • Plenty of others dismiss that "existential risk" — but believe AI is likely to replicate humankind's worst biases and flaws unless it's built with caution and care.

What's next: Two examples of potentially planet-wrecking technologies from the past century lay out alternative paths for AI.

The Manhattan Project brought the U.S. government and research scientists together to build nuclear weapons during wartime.

  • The destruction of Hiroshima and Nagasaki was a tragedy the U.S. still hasn't come to terms with — but we can say with certainty that the planet has not been destroyed by nukes, at least not yet.

Climate change shows us the other path — what happens when industry controls the fate of a key technology that could leave the earth uninhabitable.

  • At the end of the 19th century, the rise of the oil and gas industries — and the start of the profligate burning of fossil fuels that now warms our atmosphere — took place at a moment very similar to ours.
  • Like now, the U.S. government then chose a mostly hands-off approach — and a generation of unfathomably rich "robber barons" shaped a new century.
  • Their legacy is a slow-burn planetary disaster that we have yet to reverse.

The bottom line: Markets and tycoons are good at moving fast, breaking things and generating wealth. But humankind seems to manage technological danger better when the government and scientists hold the tiller.