AI, extinction, nuclear war, pandemics ... That's expert open letter bingo

'We need to prepare now'

There's another doomsaying open letter about AI making the rounds. This time a wide swath of tech leaders, ML luminaries, and even a few celebrities have signed on to urge the world to take the alleged extinction-level threats posed by artificial intelligence more seriously. 

More aptly a statement than a letter, the message from the Center for AI Safety (CAIS), signed by AI pioneer Geoffrey Hinton, OpenAI CEO Sam Altman, encryption guru Martin Hellman, Microsoft CTO Kevin Scott, and others, is a single declarative sentence predicting apocalypse if it goes unheeded:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." 

Why so brief? The goal was "to demonstrate the broad and growing coalition of AI scientists, tech leaders, and professors that are concerned by AI extinction risks. We need widespread acknowledgement of the stakes so we can have useful policy discussions," CAIS director Dan Hendrycks told The Register.

CAIS makes no mention of artificial general intelligence (AGI) in its list of AI risks, we note. And current-generation models, such as ChatGPT, are not an apocalyptic threat to humanity, Hendrycks told us. The warning this week is about what may come next.

"The types of catastrophic threats this statement refers to are associated with future advanced AI systems," Hendrycks opined. He added that necessary advancements needed to reach the level of "apocalyptic threat" may be as little as two to 10 years away, not several decades. "We need to prepare now. However, AI systems that could cause catastrophic outcomes do not need to be AGIs," he said. 

Because humans are perfectly peaceful anyway

One such threat is weaponization, or the idea that someone could repurpose benevolent AI to be highly destructive, such as using a drug discovery bot to develop chemical or biological weapons, or using reinforcement learning for machine-based combat. That said, humans are already quite capable of manufacturing those sorts of weapons, which can take out a person, neighborhood, city, or country.

AI could also be trained to pursue its goals without regard for individual or societal values, we're warned. It could "enfeeble" humans who end up ceding skills and abilities to automated machines, causing a power imbalance between AI's controllers and those displaced by automation, or be used to spread disinformation, intentionally or otherwise.

Again, none of the AI involved in that needs to be general, and it's not too much of a stretch to see the potential for current-generation AI to evolve to pose the sorts of risks CAIS is worried about. You may have your own opinions on how truly destructive or capable the software could or will be, and what it could actually achieve.

It's crucial, CAIS' argument goes, to examine and address the negative impacts of AI that are already being felt, and to use those existing harms as foresight into what comes next. "As we grapple with immediate AI risks … the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence," Hendrycks said in a statement.

"The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems," Hendrycks urged, with a list of corporate, academic and thought leaders backing him up. 

Musk's not on board

Other signatories include Google DeepMind principal scientist Ian Goodfellow, philosophers David Chalmers and Daniel Dennett, author and blogger Sam Harris and musician/Elon Musk's ex, Grimes. Speaking of the man himself, Musk's signature is absent. 

The Twitter CEO was among those who signed an open letter published by the Future of Life Institute this past March urging a six-month pause on the training of AI systems "more powerful than GPT-4." Unsurprisingly, OpenAI CEO Altman's signature was absent from that particular letter, ostensibly because it called his company out directly.

OpenAI has since issued its own warnings about the threats posed by advanced AI and called for the establishment of a global watchdog akin to the International Atomic Energy Agency to regulate use of AI.

That warning and regulatory call, in a case of historically poor timing, came the same day Altman threatened to pull OpenAI, and ChatGPT with it, from the EU over the bloc's AI Act. The rules he supports are one thing, but Altman told Brussels their idea of AI restriction was a regulatory bridge too far, thank you very much. 

EU parliamentarians responded that they wouldn't be dictated to by OpenAI, and that if the company can't comply with basic governance and transparency rules, "their systems aren't fit for the European market," as Dutch MEP Kim van Sparrentak put it.

We've asked OpenAI for clarification on Altman's position(s) and will update this story if we hear back. ®
