In a world of extreme risks, the world needs a chief risk officer
Published Date: 6/5/2021
Source: axios.com

A future of escalating danger from extreme risks demands a longer-term approach to handling these threats.

The big picture: The world was caught off guard by COVID-19, and millions of people have paid the price. But the pandemic provides an opportunity to rethink the approach to the growing threat from low-probability but high-consequence risks — including the ones we may be inadvertently causing ourselves.

Driving the news: Earlier this week, a nonprofit in the U.K. called the Centre for Long-Term Resilience put out a report that should be required reading for leaders around the world.

  • Spearheaded by Toby Ord — an existential risk scholar at the University of Oxford — "Future Proof" makes the case that "we are currently living with an unsustainably high level of extreme risk."
  • "With the continued acceleration of technology, and without serious efforts to boost our resilience to these risks, there is strong reason to believe the risks will only continue to grow," as the authors write.

Between the lines: "Future Proof" focuses on two chief areas of concern: artificial intelligence and biosecurity.

  • The longer-term threat of artificial intelligence reaching a level of superintelligence beyond humans is an existential risk in itself. More immediately, the ransomware and other cyberattacks plaguing the world could be supercharged by AI tools, and the development of lethal autonomous weapons threatens to make war far more chaotic and destructive.
  • Natural pandemics are bad enough, but we're headed toward a world in which thousands of people will have access to technologies that can enhance existing viruses or synthesize entirely new ones. That's far more dangerous.

It's far from clear how the world can control these human-made extreme risks.

  • Nuclear weapons are comparatively easy to control: bombs are difficult to make and even harder for a nation to use without guaranteeing its own destruction, which is largely why, 75 years after Hiroshima, fewer than 10 countries have developed a nuclear arsenal.
  • But both biotech and AI are dual-use technologies, meaning they can be wielded for both beneficial and malign purposes. That makes them far more difficult to control than nuclear weapons, especially since some of the most extreme risks — like, say, a dangerous virus leaking out of a lab — could be accidental, not purposeful.
  • Even though the risks from biotech and AI are growing, there is little in the way of international agreements to manage them. The UN office charged with implementing the treaty banning bioweapons is staffed by all of three people, while efforts to establish global norms around AI research — much of which, unlike the nuclear sphere, is carried out by private firms — have been mostly unsuccessful.

What to watch: The "Future Proof" report recommends a range of actions, from focusing on the development of technologies like metagenomic sequencing that can rapidly identify new pathogens to having nations set aside a percentage of GDP for extreme risk preparation, just as NATO members have pledged to devote a set share of GDP to defense.

  • A global treaty on risks to the future of humanity, modeled on earlier efforts around nuclear weapons and climate change, could at least raise the international profile of extreme risks.
  • Most importantly, the report calls for the creation of "chief risk officers" — officials empowered to examine government policy with an eye toward what could go very wrong.

The bottom line: We are entering a frightening time for humanity. Ord puts the chance that we will experience an existential catastrophe over the next 100 years at 1 in 6, the equivalent of playing Russian roulette with our future.

  • But if our actions have put the bullet in that gun, it's also in our power to take it out.