'Slow AI' needed to stop autonomous weapons making humans worse

It's you, hi, you're the problem, it's you

A public evidence session about artificial intelligence in weapons systems in the UK's House of Lords last week raised concerns over a piece of the AI puzzle that is rarely addressed: the effect of the technology on humans themselves, and specifically its potential to degrade our own mental competence and decision-making.

The speakers – there to provide evidence for the Lords to mull over – discussed a lot of what you'd expect them to. For example, they considered how the world's militaries might use dozens of autonomous units to defend a specific perimeter, or to track targets over weeks or months, to prevent a situation where their own combatant, a human being trying to blend into the local population, might be compromised or killed. They also spoke of AI's potential to reduce the risk of that horrendously understated phrase "collateral damage", and the well-documented potential of the technology to misidentify targets.

But among these concerns, a very interesting thing kept popping up: the potential of AI systems to magnify our own worst qualities as human beings, and the fact that this effect is the last thing you want in the military. Things like ill-considered decision-making, misunderstandings amplified by shortened reaction times, tactical missteps and the odd blustering lie, as well as the escalation of hostilities before the usual diplomatic procedures have time to kick in.

The witnesses were speaking as the committee held its second public evidence session last week, with the aim of looking into how the development and deployment of autonomous weapons systems will impact the UK's foreign policy and its position on the global stage.

It was international affairs think tank Chatham House's Yasmin Afina, one of the first witnesses, who introduced the term "slow AI", suggesting that humans slow their roll before we begin a nosedive we can't pull up from.

She noted "the value of slow AI" compared to the "arms race dynamic that we are seeing at the moment."

Afina added: "I see ChatGPT, GPT-4 and Bard being developed in the large language models realm and I cannot keep up. The value of slow AI is that it would allow us more robust, thorough testing, evaluation, verification and validation processes that would enable the incorporation of ethical and legal standards within the development of the systems.

"When you think about AI, we talk about non-state actors. We think about anyone who can do AI research.

"Yes, of course, anyone can do AI research and work on their computer, but, at the same time, if you want to run something that is highly advanced, you need high computing power and hardware. You need a lot of resource, whether financial or human, and not everyone has it. Only a handful of companies have access to these kinds of facilities. The value of slow AI is that we can also rethink the relationship we have vis-à-vis those companies that have the power to conduct this powerful AI research."

The Lord Bishop of Coventry, Christopher Cocksworth, had questions. "The [military] commander... may not always have the technical expertise that others would have at different stages of the development. How would that commander, in an evolving situation, with an evolving weapon, be able to calibrate the right human involvement? Would it change as the weapon changes?" he asked.

Vincent Boulanin, director of the Governance of Artificial Intelligence Programme at the Stockholm International Peace Research Institute (SIPRI), testified that adjustments would need to be made as the systems are adapted. "If we are talking about systems that keep learning, taking in new data and rechanging their parameters of use, that would be problematic. People would argue that this weapon would be inherently unlawful because you would need to do a new legal review to verify that the learning has not affected the performance in a way that would make the effect indiscriminate by nature."

Yes, but who does that thing belong to?

Witness Charles Ovink, Political Affairs Officer at the United Nations' Office for Disarmament Affairs, raised another point about Autonomous Weapons Systems (which Amazon won't be best pleased to discover was shortened to AWS throughout the hearing): "They have the potential to introduce elements of unpredictability at times of international tension. They can lead to actions that are difficult to attribute."

Ovink added that the "difficulty of attribution is an element that is likely to come up frequently today, creating risks for misunderstanding and unintended escalation, which I think you can also agree is a serious concern."

He also spoke of the problem of shortening the amount of time governments have to make a decision: "AI technologies have the potential to aid decision-makers by allowing faster real-time analysis of systems and data, and providing enhanced situational awareness.

"However, this presents its own concerns. It may compress the decision-making timeframe and lead to increased tensions, miscommunication and misunderstanding, including, of particular concern to my office, between nuclear weapon state."

Is it fair to those on the battlefield to pick them off with machines?

SIPRI's Boulanin then raised what ethicists might see as the question of whether it's a fair fight, an idea with parallels in European law, where the GDPR gives data subjects the right not to be subject to a decision based solely on automated processing. Boulanin told the hearing: "There are also people who have this ethical perspective, although it is disputed, that it would be morally wrong to have an autonomous weapon to identify military persons on the battlefield. It would be a violation of the combatants' right to dignity. That point is highly contested in the policy discussion. That is for the humanitarian concern."

He also added that there was fear the systems "would not be reliable enough, or would fail in a way that would expose civilians. The system might misidentify a civilian as a lawful target, for instance, or not be able to recognize people who are hors de combat," who are protected under international humanitarian law.

He added that the committee should also consider that such tech would not only be harder to trace, but might also end up in the hands of those conducting guerrilla warfare against an invading force. "Some states might be incentivised to perhaps conduct operations that could lead to an armed conflict because they feel like, since it is a robotic system, attribution would be harder.

"I would point out here that it is not an AWS-specific problem. It is basically a problem with robots in general.

"The idea of these low-tech autonomous weapons that could be developed by a terrorist group or people who are just putting together technologies from the commercial sector clearly needs to be considered."

A message to you, Rudy

The British government brought out its policy paper on AI last week – on the same day that hundreds of computer scientists, industry types, and AI experts signed an open letter calling for a pause for at least six months in the training of AI systems more powerful than GPT-4.

Signatories included Apple co-founder Steve Wozniak, SpaceX CEO Elon Musk, IEEE computing pioneer Grady Booch, and more.

Several of the witnesses addressed the letter, most of whom seemed to think six months was not enough. And the well-funded companies building the technology did not escape notice either, with Chatham House's Afina noting: "Coming back to the commercial civilian sector, you have this arms race dynamic and urge to innovate and deploy technologies in order to have a cutting-edge advantage. I do not think that we are spared from this dynamic in the military sphere. For example, in Ukraine there is the deployment of technologies reportedly from Palantir and Clearview that are based on AI. These are highly experimental technologies."

The UN's Ovink added: "Even given the nature of the technology, a capacity that is scrupulously civilian may be perceived by neighbors as a kind of latent military capacity. Even if, to some degree, we are talking about a focus that is exclusively developing a significant domestic AI capacity, it will have an impact."

The companies themselves, and their potential legal culpability, also came under scrutiny. "The nature of the companies that we have mentioned before means that not necessarily all those aspects are located within a single jurisdiction, whether we are talking about where the data is collected, where the servers are and those kinds of things," Ovink said.

"When you talk about delegation, the issue is also accountability. If those things were able to be demonstrated, so you had a system with no black box element that was completely explainable and we could understand why the decisions were made, there would still need to be an element of human accountability. That is the part we would wish to underline."

Lord Houghton of Richmond questioned this, claiming that from his point of view "there still is human accountability, because a human – ultimately a politician, a Minister – has given a directive that he is content to delegate to this particular piece of autonomous machinery in certain circumstances such that it can act on its own predetermined algorithms or whatever, so long as they are not a black box that nobody can understand."

Ovink responded: "In that case, the person making that decision would be legally responsible for the consequences of the decision."

Afina added that "there will always be a human accountable. That is for sure. It is more a question of choosing the appropriate contexts in which there would be more benefits than risks of deploying AI that may have certain levels of human control... or not."

You can watch the whole thing play out here.

Reg readers who are UK residents and are interested in having their say have until April 14 to submit written evidence. ®

Bootnote:

Quotes in this piece are drawn from the livestreamed and recorded footage, which we cross-checked against a provided transcript. Members and witnesses may still avail themselves of an opportunity to correct the record, in which case we will update the piece.
