
Proliferation of AI weapons among non-state actors 'could be impossible to stop'

Governments also have no theory on how nefarious groups might behave using the tech

The proliferation of AI in weapon systems among non-state actors such as terrorist groups or mercenaries would be virtually impossible to stop, according to witnesses at a UK parliamentary hearing.

The House of Lords' AI in Weapon Systems Committee yesterday heard how the software-based nature of AI models that might be used in a military context makes them difficult to contain and keep out of nefarious hands.


Speaking to the committee, James Black, assistant director of defense and security research group RAND Europe, said: "A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it's missiles, it's engines, it's nuclear materials."

An added uncertainty is that there is no established "war game" theory of how hostile non-state actors might behave with AI-based weapons. A further uncertainty we'd add is that today's artificial intelligence isn't particularly reliable, a point we hope isn't lost on anyone.

RAND began in 1945 as a US government unit under a special contract to the Douglas Aircraft Company, but soon developed into an independent, non-profit corporation aimed at improving policy making with scientific rigor. It was one of the leaders in applying game theory to Cold War nuclear weapons proliferation.

Black said: "On the question about escalation: in general, we don't have particularly good theory for understanding how to deter non-state actors. A lot of the deterrence theory [has] evolved out of Cold War nuclear deterrence in the USSR, USA and the West. It is not really configured the same way to think about non-state actors, particularly those which have very decentralized, loose non-hierarchical network command structures, which don't lend themselves to influencing in the same way as a traditional top-down military adversary."

The situation with AI-enhanced weapons differs from earlier military analysis in that the private sector is well ahead of government research, which was not the case with previous physical threats, he said.

"The locus of innovation within defense, specifically in this area, has shifted away from the public sector [to] the private. When we talk about non-state actors that conjures images of violent extremist organizations, but it should include large multinational corporations, which are very much at the forefront of developing this technology. This is not like early computing or the jet engine, which was very much coming out of government-funded labs. This is something where the private sector is leading the way and is therefore shaping the kind of debate about the type of governance as well as the actual capabilities that are available."

Even if governments were able to put in place legislation designed to curb the proliferation of AI-enhanced weapons, there would be a huge incentive to cheat, Kenneth Payne, professor of strategy at King's College London, told the committee.


"If you're having arms control, you need to have some process of validation," he said. "But the signature for developing AI is quite small. You don't need these uranium enrichment facilities, for example. You're talking about warehouses with computers and scientists. How can you monitor potential defection from the arms control regime? There's a huge incentive to cheat on regulation if you believe that these technologies confer profound military advantage. That makes me slightly uncomfortable… I'm slightly skeptical about the prospects for regulation."

Last month, hundreds of computer scientists, tech industry leaders, and AI experts signed an open letter calling for a pause for at least six months in the training of AI systems more powerful than GPT-4. Signatories included Apple co-founder Steve Wozniak, SpaceX CEO Elon Musk, and IEEE computing pioneer Grady Booch.

But the prospect of a pause was wholly unrealistic, Payne said. "It reflects the degree of societal unease about the rapid pace of change that people feel is coming down the tracks towards them. But I don't think it is a realistic proposition." ®
