AI weapons need a safe back door for human control

China spends twice as much on military AI as everyone else put together: expert witness

Policymakers should insist that manufacturers build controls into artificial intelligence-based weapons systems so they can be switched off if they get out of control, experts told the UK Parliament.

Speaking to the House of Lords AI in Weapons Systems Committee, Lord Sedwill, the former national security adviser and senior civil servant, said the "challenge" politicians should put to the tech industry is whether it can guarantee that AI-enhanced weapons systems remain entirely under human supervision.

"We have to put over to the people who are developing [weapons systems]: is there a way of essentially ring-fencing, some code or whatever it might be, that couldn't be amended … that would essentially set boundaries to how an autonomous system learn and evolve and evolve on their own … something that would put boundaries to how it might operate?" he asked the Lords.

Also attending the hearing, Hugh Durrant-Whyte, director of the Centre for Translational Data Science at the University of Sydney, agreed. "There is an opportunity – and there are groups working on this – of formally, mathematically proving that an algorithm does only this and no more, providing an absolute guarantee, rather than testing it until it breaks. That, in the future, is the only way we will get these things right. But it is only an emerging discipline," he said.
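Formal verification of this kind is already routine for small programs in proof assistants. As a toy illustration only (the example is ours, not the committee's), here is a machine-checked guarantee, in Lean 4 with Mathlib, that a value-clamping function can never step outside its bounds:

```lean
import Mathlib  -- heavyweight, but keeps this toy example self-contained

/-- Clamp a commanded value `x` into the fixed envelope `[lo, hi]`. -/
def clamp (lo hi x : Int) : Int := max lo (min hi x)

/-- Machine-checked guarantee: the clamped command never exceeds `hi`. -/
theorem clamp_le_hi (lo hi x : Int) (h : lo ≤ hi) : clamp lo hi x ≤ hi :=
  max_le h (min_le_left hi x)

/-- Machine-checked guarantee: the clamped command never drops below `lo`. -/
theorem lo_le_clamp (lo hi x : Int) : lo ≤ clamp lo hi x :=
  le_max_left lo (min hi x)
```

Scaling such proofs from a three-line clamp function to a learning weapons system is precisely the "emerging discipline" Durrant-Whyte describes.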

Elsewhere in the session, Durrant-Whyte warned China was investing heavily in weapons AI, but had yet to take the lead.

"China is investing arguably twice as much as everyone else put together. We need to recognize that it genuinely has gone to town. If you look at the payments, if you look at the number of publications, if you look at the companies that are involved, it is quite significant. And yet, it's important to point out that the US is still dominant in this area. It's got more innovation. It's got fantastic companies … and the UK, though, does really punch above its weight in a lot of different areas in AI," he said.

Earlier in the inquiry, James Black, assistant director of the defence and security research group at RAND Europe, warned that non-state actors could take the lead in the proliferation of AI-enhanced weapons systems.

"A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it's missiles, it's engines, it's nuclear materials," he warned. ®
