Quit worrying about killer robots, they are coming whether you like it or not – and they absolutely will not stop

The only winning move is not to play, as a wise computer said


The use of fully automated AI systems in military battles is inevitable unless strict regulations are put in place by international treaty, eggheads have opined.

Their paper, which popped up on arXiv [PDF] last week, discusses the grim outlook for the development of killing machines for armed forces. The idea of keeping humans in the loop has always been favoured because modern AI systems like neural networks are black boxes: their inner workings are inherently difficult to understand. Plus, you know, we've all seen Terminator.

Having said that, the trio of researchers – who hail from ASRC Federal, a company focused on supporting US federal intelligence and defense agencies, and the University of Maryland in the US – believe lethal autonomous weapon systems (LAWS) could be employed by the military anyway.

“We explore the implications of increasingly capable AI in the kill chain and how this will lead inevitably to a fully automated, always on system, barring regulation by treaty,” the abstract of the paper – Integrating Artificial Intelligence into Weapon Systems – stated.

It’s a frightening prospect, and one that rests on a few assumptions. The most obvious is the technology itself. If AI systems become more robust and transparent over time, the pressure to use them to aid soldiers on the battlefield will increase, the researchers argued.

Eventually, the machines will push humans out of the loop altogether. At first, people will be relegated to supervisory roles, and finally they’ll end up as mere “killswitch operators” monitoring these autonomous weapons. Machines can be much faster than humans: killing an enemy often comes down to reflexes, and once soldiers realise these tools can outperform them, they’ll come to trust and rely on them.

“It is our strong belief that intelligent weapons systems of the future will move and think at machine speed. This disproportionate capability and the inevitable system trust human operators will place in these machines means that most if not all lethal and sub-lethal interactions will only be analyzable in hindsight,” the paper said.

DARPA, the US military's research arm, for example, wants to develop fighter jets that can autonomously perform dogfighting maneuvers. If it succeeds, human pilots will be able to trust their planes to do things like dodging enemy fire to keep them safe. As the technology improves, the jets may be able to perform other tasks too, such as aiming and firing missiles mid-air.

Human-in-the-loop systems will lead to fully autonomous systems

As these systems advance, the ones that rely least on human supervision will dominate. Humans, meanwhile, will be handed other roles, such as analyzing the behavior of these systems and concentrating on broader strategic areas.

The researchers devised a hypothetical scenario. Imagine an AI system used to identify whether a jet is friend or foe. It spots an aircraft that has been recognized as non-threatening, yet its approach seems hostile. What should the system do?

The researchers present four options: “(1) the system can declare that it has identified the aircraft as hostile and provide a lock; (2) the system can declare the aircraft as friendly and open a channel to warn it; (3) the system can present a set of ranked recommendations and provide a set of options to the user; or (4) the system simply displays the raw information.”
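For the code-inclined, here is a minimal sketch in Python of how those four modes might be modelled. To be clear, every name here (ResponseMode, Contact, respond) and every output string is our own invention for illustration; none of it comes from the paper.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ResponseMode(Enum):
        """The paper's four options, roughly from most to least autonomous."""
        DECLARE_HOSTILE_AND_LOCK = auto()   # (1) declare hostile, provide a lock
        DECLARE_FRIENDLY_AND_WARN = auto()  # (2) declare friendly, open a channel to warn it
        RANKED_RECOMMENDATIONS = auto()     # (3) present ranked recommendations to the user
        RAW_INFORMATION_ONLY = auto()       # (4) simply display the raw information

    @dataclass
    class Contact:
        transponder_friendly: bool  # recognized as non-threatening...
        approach_hostile: bool      # ...but approaching in a hostile manner

    def respond(contact: Contact, mode: ResponseMode) -> str:
        """What the system puts in front of the operator in each mode."""
        if mode is ResponseMode.DECLARE_HOSTILE_AND_LOCK:
            return "HOSTILE: target lock provided"
        if mode is ResponseMode.DECLARE_FRIENDLY_AND_WARN:
            return "FRIENDLY: warning channel opened"
        if mode is ResponseMode.RANKED_RECOMMENDATIONS:
            # The ambiguous case: hand the human a ranked list, not a verdict
            return "RECOMMEND: 1) warn off  2) keep tracking  3) provide lock"
        return (f"RAW: transponder_friendly={contact.transponder_friendly}, "
                f"approach_hostile={contact.approach_hostile}")

    print(respond(Contact(transponder_friendly=True, approach_hostile=True),
                  ResponseMode.RANKED_RECOMMENDATIONS))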

The third choice may seem the most logical: it leaves humans with the final decision on whether to carry out a particular action. At first, the operator will examine the list, but after prolonged use, if they realise the top-ranked recommendation is usually the best choice, they will eventually just select that option without much thought.

At that point, the humans have essentially become “rubber stamp[s],” the researchers reckoned. So even if humans are nominally kept in the loop, machines that become effective enough will take over the decision process and, in practice, behave like autonomous systems.
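That drift is easy to illustrate with a toy simulation. The Python sketch below assumes an operator who stops scrutinising the ranked list once the top recommendation has been right 20 times in a row, with a 95 per cent hit rate; both figures are invented for illustration and appear nowhere in the paper.

    import random

    random.seed(0)

    # Both numbers invented purely for illustration
    TOP_OPTION_ACCURACY = 0.95  # how often the top recommendation turns out to be right
    TRUST_THRESHOLD = 20        # consecutive correct calls before the operator stops reviewing

    def reviews_before_rubber_stamping(engagements: int) -> int:
        """Count how many engagements the operator genuinely scrutinises."""
        streak, reviewed, trusting = 0, 0, False
        for _ in range(engagements):
            if trusting:
                continue  # the operator now waves the top recommendation through
            reviewed += 1
            streak = streak + 1 if random.random() < TOP_OPTION_ACCURACY else 0
            trusting = streak >= TRUST_THRESHOLD
        return reviewed

    # Out of 1,000 engagements, only a few dozen typically get a real look
    print(reviews_before_rubber_stamping(1000))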

There are also rising pressures and incentives to improve these kinds of systems. “All [countries] are equally pressured to gain superiority, and as such the inevitability of fully automated, always on systems should be seriously considered in all aspects of AI integration,” the paper said.

So, are we all doomed? Yes and no, according to the researchers. The best way to avoid catastrophe is to support the regulation and prohibition of LAWS. “Like chemical and biological weapons, for weaponized AI, 'the only winning move is not to play.'”

Phillip Feldman, a research scientist for ASRC Federal and co-author of the paper, told The Register: “I think that if it can be shown that implementing AI in weapons systems, even in a comparatively simple 'human in the loop' case, creates inevitable pressures to full LAWS systems, that nations may be interested in avoiding an expensive arms race that would produce questionable value.

“Nuclear weapons, chemical and biological weapons, along with specific types of weapons such as cluster bombs have all been successfully negotiated. I don’t think that this option should be discounted.” ®

