AI's next big fight: Whose values should it hold?
Published Date: 1/24/2024
Source: axios.com

There's no such thing as an AI system without values — and that means this newest technology platform must navigate partisan rifts, culture-war chasms and international tensions from the very beginning.

Why it matters: Every step in training, tuning and deploying an AI model forces its creators to make choices about whose values the system will respect, whose point of view it will present and what limits it will observe.

The big picture: The creators of previous dominant tech platforms, such as the PC and the smartphone, had to wade into controversies over map borders or app store rules. But those issues lay at the edges rather than at the center of the systems themselves.

  • "This is the first time a technology platform comes embedded with values and biases," one AI pioneer, who asked not to be identified, told Axios at last week's World Economic Forum in Davos. "That's something countries are beginning to notice."

How it works: AI systems' points of view begin in the data on which they are trained — and in the efforts their developers may make to mitigate the biases in that data.

  • From there, most systems undergo an "alignment" effort, in which developers try to make the AI "safer" by rating its answers as more or less desirable.
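
To make that rating step concrete, here's a minimal sketch of the idea behind preference-based alignment: human raters mark one response as preferable to another, and a simple "reward model" learns to score the preferred response higher, using the pairwise loss common in RLHF-style training. Everything below (the rated pairs, the bag-of-words scoring, the tiny training loop) is invented for illustration; production systems train large neural reward models, but the objective has the same shape.

```python
# Toy illustration of the "alignment" rating step: a reward model learns
# from human preference ratings via a Bradley-Terry pairwise loss.
# All data and features here are made up; this is not any vendor's pipeline.
import math
from collections import Counter

# Hypothetical rated pairs: (response raters preferred, response they rejected).
PAIRS = [
    ("I can't help with that request.", "Sure, here is how to cause harm."),
    ("Here are balanced views on the topic.", "Only one side is correct."),
]

VOCAB = sorted({w for a, b in PAIRS for w in (a + " " + b).lower().split()})
weights = {w: 0.0 for w in VOCAB}  # one learnable weight per word

def score(text: str) -> float:
    """Reward = sum of learned word weights (bag-of-words features)."""
    counts = Counter(text.lower().split())
    return sum(weights[w] * c for w, c in counts.items() if w in weights)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Gradient descent on the pairwise loss -log sigmoid(score(chosen) - score(rejected)),
# which pushes the preferred response's score above the rejected one's.
LR = 0.1
for _ in range(200):
    for chosen, rejected in PAIRS:
        margin = score(chosen) - score(rejected)
        grad = sigmoid(margin) - 1.0  # d(loss)/d(margin), always negative
        for w, c in Counter(chosen.lower().split()).items():
            weights[w] -= LR * grad * c   # raise weights of preferred words
        for w, c in Counter(rejected.lower().split()).items():
            weights[w] += LR * grad * c   # lower weights of rejected words

for chosen, rejected in PAIRS:
    print(f"preferred: {score(chosen):+.2f}  rejected: {score(rejected):+.2f}")
```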

Yes, but: AI's makers routinely talk about "alignment with human values" without acknowledging how deeply contested all human values are.

  • In the U.S., for instance, you can say your AI chatbot is trained to "respect human life," but then you have to decide how it handles conversations about abortion.
  • You can say that it's on the side of human rights and democracy, but somehow it's going to have to figure out what to say about Donald Trump's claim that the 2020 election was stolen.
  • As many AI makers struggle to prevent their systems from showing racist, antisemitic or anti-LGBTQ tendencies, they face complaints that they're "too woke." The Grok system from Elon Musk's xAI is explicitly designed as an "anti-woke" alternative.

Globally, things get even trickier. Some of the biggest differences are between the U.S. and China — but for geopolitical reasons, U.S.-developed systems are likely to be inaccessible in China and vice versa.

  • The real global battle will be in the rest of the world, where both Chinese and U.S. systems will likely be competing head-on.
  • AI makers face pressure from governments to adapt their systems to reflect local sensibilities around issues such as women's and LGBTQ rights.
  • Other regimes may pressure AI developers to limit criticism of the government and discussion of political rivals.
  • How generative AI systems present controversial current events, like Israel's current war in Gaza, will become another point of tension.

What they're saying: In conversations with AI leaders at Davos, some saw the values problem as a top concern, while others downplayed the risks.

  • Asked about the issue in an on-stage interview at Axios House, OpenAI CEO Sam Altman said that only in extreme cases would the company decide not to enter a market.
  • "If the country said, you know, all gay people should be killed on sight, then no," he told Axios.
  • However, Altman also acknowledged the reality that a one-size-fits-all set of responses is unlikely to fly around the world and indicated that OpenAI would be open to adjusting its answers for different users or countries.
  • "We have to be somewhat uncomfortable as a tool builder here with some of the uses of our tools," he said. "There are a lot of governments who I think have different beliefs than us where you can still say, there's enough common ground here."

Be smart: Right now, in many cases, only the makers of an AI system know exactly what values they're trying to embed — and how successful they are.

  • Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, said she's concerned that so many AI companies' decisions aren't visible to the public, and that it's unclear how those decisions were reached and who was consulted.
  • "We do need more visibility and understanding about the ways in which they're running those processes and making those decisions," Givens said in an interview on the sidelines of the World Economic Forum.
  • While Givens acknowledged it's unlikely that one set of values will be acceptable around the world, she said transparency is especially important when companies are operating in countries known to violate human rights.

Givens suggested that AI companies need something like the Santa Clara Principles, developed in 2018 as a set of best practices for transparency and accountability in content moderation decisions.