Politicians call for ban on 'killer robots' and the curbing of AI weapons
'This is the Oppenheimer moment of our generation'
Austria's foreign minister on Monday likened the rise of military artificial intelligence to the existential crisis faced by the creators of the first atomic bomb, and called for a ban on "killer robots".
"This is, I believe, the Oppenheimer moment of our generation," Alexander Schallenberg said at the start of the Vienna conference entitled 'Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation.'
"Autonomous weapons systems will soon fill the world's battlefields. We already see this with AI-enabled drones and AI-based target selection," he said.
Schallenberg's comments come just weeks after the US Air Force and Defense Advanced Research Projects Agency (DARPA) detailed efforts to put AI in control of tanks and even an F-16 fighter jet. As we've previously reported, efforts to expand the autonomy of weapons platforms have been under way for years.
Schallenberg sees AI as the biggest revolution in warfare since the invention of gunpowder but feels it is far more dangerous. With the next logical step in military AI development involving removing humans from the decision-making process, he believes there's no time to waste.
"Now is the time to agree on international rules and norms to ensure human control," he said. "At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines."
Schallenberg emphasized that he isn't against the use of AI, a perspective shared by many panelists at the conference, but that it's important to understand the technology's implications as a weapon of war.
Echoing this sentiment, Hasan Mahmud, Bangladesh's minister of foreign affairs, noted AI has tremendous potential to advance science and help humanity, and argued such applications deserve more effort than automating violence.
Don't dump the human!
One of the chief concerns raised by panelists was accountability for the use of AI in warfare if humans are no longer involved in the decision to use violence.
"We cannot ensure compliance with international humanitarian law anymore if there is no human control over the use of these weapons," said Mirjana Spoljaric Egger, the president of the International Committee of the Red Cross, before impressing on listeners the need to act quickly.
Today even the most sophisticated AI models aren't perfect and have been shown to make mistakes and exhibit bias, Schallenberg said, highlighting Spoljaric Egger's concerns.
"At the end of the day, who's responsible if something goes wrong?" he asked. "The commander? The operator? The software developer?"
The proliferation of AI weapons was another issue raised by panelists. The argument is that where technologies like nuclear weapons require immense resources and technological know-how to harness, weaponizing AI could prove far easier.
"It probably very quickly will end up in the hands of non-governmental actors or terrorist groups," Schallenberg warned.
Another concern, raised by Estonian programmer Jaan Tallinn, was that AI models may evolve to the point at which they are able to accurately distinguish human beings based on their ethnicity.
"When autonomous weapons become able to perfectly distinguish between humans, they will make it significantly easier to carry out genocides and targeted killings that seek specific human characteristics," he said.
For Tallinn, AI poses an insurmountable risk to the human species if steps to control its use, particularly in military applications, are not taken. He worries that accidental errors by autonomous weapons could spark even wider wars.
But, as Eivind Vad Petersson, Norway's state secretary of foreign affairs, pointed out, AI is not beyond the reach of international law; it's just that the statutes weren't written with the technology in mind.
"The challenge in front of us, first and foremost, is to better establish how the existing rules apply to such weapons systems," he said.
"[With AI] we are trying to regulate the future."
As to whether the world can come together to prevent AI weapons from closing the loop, the general consensus among panelists at the event was cautious optimism.
"So, certainly, a small subset of humans are making decisions that do undermine our future species, but we're definitely capable of acting preventatively," Tallinn said, emphasizing that, despite its habit of getting drawn into arms races, the human race has acted preventatively before.
"We have acted preventatively on banning blinding laser weapons, not to mention constraints on biological, chemical, and nuclear weapons," he said.
As to what happens if we don't act, Schallenberg evoked the kinds of dystopian futures depicted in popular science fiction. "We all know the movies: Terminator, The Matrix, and whatever they're called. We don't want killer robots." ®