Good news: AI could solve the pension crisis – by triggering a nuclear apocalypse by 2040

New US RAND report predicts a grim technological future

AI could kick-start a nuclear war by 2040, according to a dossier published this month by the RAND Corporation, a US policy and defence think tank.

The technically light report describes several scenarios in which machine-learning technology tracks and sets the targets of nuclear weapons. This would involve AI gathering and presenting intelligence to military and government leaders, who make the decisions to launch weapons of mass destruction.

But there is danger in developing and deploying intelligent software that has its finger halfway on the red button. Other nations may interpret this as an escalation, ultimately prompting one of them to launch a preemptive strike, or build a "doomsday machine," before it can be destroyed.

When computers are programmed to competently recognize threats and recommend retaliation, even purely as a deterrent, the mere presence of this technology could cause the world to spiral into catastrophe: no nuclear state wants to be annihilated first on a robot's say-so, so each has an incentive to launch first.

In a way, a technological leap in AI could upset the balance, or parity, essential to mutually assured destruction, leading to all-out thermonuclear war. Fearing a machine is about to go haywire and order or recommend a devastating strike, a nation could jump the gun.

What's worse, a military could overestimate the capabilities of a rival nation's AI systems, or misinterpret its actions, sparking a totally unnecessary conflict. Meanwhile, a faulty AI could jump the gun itself, and somehow fool humans into launching a needless strike. Software could be tricked into thinking missiles were incoming, just like the false alarms during the Cold War. This could happen before 2040, we're warned.

"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Andrew Lohn, coauthor on the paper and an associate engineer at RAND.

"There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."


The potential for an AI arms race is significant, given the parallels with the Cold War, the researchers argued. Their report was built from three separate workshops the RAND Corporation held last year with nuclear security experts and AI researchers.

“The effect of AI on nuclear strategy depends as much or more on adversaries’ perceptions of its capabilities as on what it can actually do,” the paper said.

Some believed that AI would eventually evolve to “superintelligences” with powers that could not be fully understood and controlled by humans. Others were much more skeptical and thought AI would not advance enough to be considered a threat.

Some researchers invited to the workshop said AI will not be limited by talented engineers or datasets as the field matures. Instead, it’ll boil down to hardware and the amount of computing power available.

The report doesn't really discuss the current state of AI, nor how today's techniques could end up controlling nuclear weapons. One of the few projects it did mention was the effort to master StarCraft, a real-time strategy game, with bots. DeepMind, Facebook, and Alibaba are all involved in StarCraft research.

The researchers believe the game "mirrors a military engagement complete with logistics, infrastructure, and a range of moves and strategies that are difficult to specify." They did admit that StarCraft is obviously far simpler than nuclear war, but said that by 2040 it was not unreasonable to assume AI agents will be able to play out scenarios or stages of military war games at superhuman levels.

“At present, we cannot predict which — if any — of these scenarios will come to pass, but we need to begin considering the potential impact of AI on nuclear security before these challenges become acute,” the report concluded.

So, essentially: 2040 is a long way off, and who knows what computers will be capable of by then! Gee, thanks. See you in 2040, hopefully. ®
