Exclusive: Sam Altman says ChatGPT will have to evolve in “uncomfortable” ways
Published Date: 1/17/2024
Source: axios.com

OpenAI's next big model "will be able to do a lot, lot more" than the existing models can, CEO Sam Altman told Axios in an exclusive interview at Davos on Wednesday.

Why it matters: Altman told Axios' Ina Fried that AI is evolving much more rapidly than previous technologies that took Silicon Valley by storm. But he also conceded that the evolution and proliferation of OpenAI's technology will require "uncomfortable" decisions.

  • Altman believes future AI products will need to allow "quite a lot of individual customization" and "that's going to make a lot of people uncomfortable," because AI will give different answers for different users, based on their values and preferences, and possibly on what country they reside in.
  • "If the country said, you know, all gay people should be killed on sight, then no ... that is well out of bounds," Altman tells Axios. "But there are probably other things that I don't personally agree with, but a different culture might. ... We have to be somewhat uncomfortable as a tool builder with some of the uses of our tools."
  • Asked if future versions of OpenAI products might answer a question differently in different countries based on that country's values, Altman said, "It'll be different for users with different values. The countries issue, I think, is somewhat less important."

What's coming: We are headed toward a new way of doing knowledge work, Altman said, speaking at Axios House on the sidelines of the World Economic Forum.

  • Soon, "you might just be able to say 'what are my most important emails today,'" and have AI summarize them.
  • Altman says AI advances will "help vastly accelerate the rate of scientific discovery." He doesn't expect that to happen in 2024, "but when it happens, it's a big, big deal."
  • Altman said his top priority right now is launching the new model, likely to be called GPT-5.

Altman admitted he's "nervous" about AI's impact on elections around the world this year, but was defensive about OpenAI's investments in that area.

  • Altman said he wanted to avoid "fighting the last war" on election misinformation.
  • In recent weeks, OpenAI has announced it would ramp up efforts to reduce misinformation and abuse of its models related to more than 60 elections taking place around the world in 2024.
  • He didn't specify how many OpenAI staff would work on election troubleshooting, but he rejected the idea that simply having a large election team would help solve election problems. OpenAI has far fewer people devoted to election security than companies like Meta and TikTok.

Flashback: Altman was ousted as CEO last November before being swiftly reinstated. The tensions with the board had been driven by an internal debate over growth vs. guardrails on the company's powerful technology.

The intrigue: Altman said there's no update on whether his close associate and OpenAI co-founder Ilya Sutskever is returning to the company in a senior role after he resigned in the wake of the board debacle.

  • Surprisingly, Altman admitted that he "isn't sure on the exact status" of Sutskever's employment.
  • Altman's interests and investments extend well beyond OpenAI — from nuclear fusion to chip-making — leaving many to wonder if he is paying enough attention to overseeing a technology he says could destroy humanity.
  • Altman said "OpenAI is what I am doing" and that it was a "misrepresentation" to say he is engaged in projects that don't support OpenAI. He said he continues to support startups he was funding before joining OpenAI.

Driving the news: Altman defended content licensing deals signed by OpenAI with major publishers including AP and Axel Springer, and he took a swipe at the NY Times, which is suing OpenAI for copyright infringement.

  • Altman said OpenAI doesn't need NYT content to build successful AI models, but dodged when asked if he would oversee the creation of a model based only on licensed and truly public domain content: "I wish I had an easy yes or no answer," he said.
  • "We can respect an opt-out" from companies like the NYT he said, "but NYT content has been copied and not attributed all over the web" and OpenAI can't avoid training on that, he said.
  • Altman said OpenAI decided to allow military use of its models out of a desire to support the U.S. government, but conceded "there will be a lot of things that we have to start slowly on."

What they're saying: Altman's advice for CEOs trying to figure out the best use of AI for their company is to ask: "How can I make my internal workflow more efficient?"

  • Altman's wisdom after his 2023 experience of being fired and rehired as CEO: "Don't let important but not urgent problems fester."
  • Asked what he learned in 2023 that he would take into 2024, Altman joked: "I learned something about board members."