When clever code kills, who pays and who does the time? A Brit expert explains to El Reg

Assigning liability for artificial intelligence won't be easy


Analysis On September 26, 1983, Stanislav Petrov, an officer in the Soviet Union's Air Defense Forces, heard an alarm and saw that the warning system he'd been assigned to monitor showed the US had launched five nuclear missiles.

Suspecting an error with the system's sensors, he waited instead of alerting his superiors, who probably would have ordered a devastating retaliatory strike.

After a few tense minutes, Petrov's suspicion was vindicated when nothing happened. There was no American missile launch, and the world avoided a catastrophe through his decision to ignore the phantom warheads.

In an academic paper titled Artificial Intelligence and Legal Liability, Dr John Kingston, a senior lecturer in computing, engineering and mathematics at the University of Brighton in England, observed that the errant warning was later attributed to an orbiting satellite mistaking the reflection of the Sun for missile heat signatures.

Kingston suspects an AI system would not have performed as well.

"If an AI system had been in charge of the Soviet missile launch controls that day, it may well have failed to identify any problem with the satellite, and launched the missiles," wrote Kingston. "It would then have been legally liable for the destruction that followed, although it is unclear whether there would have been any lawyers left to prosecute the case."

In his paper, Kingston explored the issue of accountability for AI, which he defines as any system that can recognize a situation or event and then take action via an IF-THEN conditional statement.

Yes, it's a low bar. It's pretty much any software, and yet it's arguably one of the better definitions of AI because it avoids fruitless attempts to distinguish between what is and isn't intelligence.
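
To see just how low the bar sits, here's a minimal, purely illustrative sketch of a program that would count as AI under Kingston's definition – the function name, inputs, and confidence threshold are invented for this example, not taken from his paper:

```python
# A toy "AI" by Kingston's definition: it recognizes a situation (a sensor
# reading) and takes an action via an IF-THEN conditional. Everything here
# is made up for illustration; no real early-warning system looks like this.

def early_warning_response(detected_launches: int, sensor_confidence: float) -> str:
    """Decide how to respond to an apparent missile launch."""
    # IF the system believes it has seen launches with high confidence,
    # THEN escalate to human commanders...
    if detected_launches > 0 and sensor_confidence > 0.9:
        return "alert command"
    # ...otherwise treat the reading as a probable sensor fault, as Petrov did.
    return "log as suspected false alarm"


print(early_warning_response(detected_launches=5, sensor_confidence=0.95))
```

By this measure, the handful of lines above already qualifies as an "AI system" – which is exactly why the definition sidesteps arguments over what intelligence really is.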

Criminal liability

Kingston recounted the legal requirements for criminal liability under US law – an actus reus (an action or a failure to act) and a mens rea (a mental intent) – and examined how that might apply to AI software.

Pointing to a framework laid out in 2010 by Gabriel Hallevy, a professor at Israel's Ono Academic College, Kingston recounted three possible legal models for criminal liability.

If an AI program goes seriously awry, it might be treated by the courts as an entity with insufficient mental capacity for criminal intent. The software, and its makers and operators, would then be off the hook – unless it was found that the programmers or users had directed it to take the action.

Programmers might be held liable as accomplices to a criminal act by an AI if the crime was deemed a "natural or probable consequence" of the software's operation. As an example of a problem that should have been foreseen, Kingston recounted the story of a Japanese factory worker accidentally killed by a robot arm that mistook him for a motorcycle.

Or such software might qualify for direct liability, though that gets complicated: the actus reus – the act itself – can usually be established, but the mens rea, the intent, is much harder to prove.

Intent isn't necessary, however, in "strict liability" situations such as exceeding the speed limit. A self-driving car could be held liable for speeding without anyone having to establish intent: the offence turns on the act alone, and the code was expected to keep the vehicle under the limit.
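
As a rough illustration of what keeping the vehicle under the limit might look like in code, here's a hypothetical speed-governor sketch – the class, limits, and safety margin are invented for this example rather than drawn from any real autonomous-driving stack:

```python
# Hypothetical sketch of a speed-governing check a self-driving stack might
# run before acting on a planner's requested speed. Purely illustrative.

class SpeedGovernor:
    def __init__(self, margin_kph: float = 0.0):
        # Optional safety margin kept below the posted limit.
        self.margin_kph = margin_kph

    def clamp(self, requested_kph: float, posted_limit_kph: float) -> float:
        """Never command a speed above the posted limit (minus any margin)."""
        ceiling = posted_limit_kph - self.margin_kph
        return min(requested_kph, ceiling)


governor = SpeedGovernor(margin_kph=2.0)
# The planner asks for 115 km/h in a 100 km/h zone; the governor caps it at 98.
print(governor.clamp(requested_kph=115.0, posted_limit_kph=100.0))
```

If the shipped software lacks a check of this sort and the vehicle speeds anyway, strict liability means the offence is made out without anyone having to prove intent.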

Legal defenses

Kingston pointed out that finding an AI system criminally liable doesn't clarify who should be punished. He also noted that legal defenses invoked for people – such as insanity, coercion or intoxication – might be applied to software in situations where, for example, a computer virus has been identified on the affected system.

In other words, a celeb can claim they went crazy and broke the law because they mixed up their medical prescriptions; a computer that goes berserk can claim it got a virus.

He then went on to analyze how civil law might apply. AI software could be found liable for negligence (a tort, or civil wrong, under the law) if the code owed a duty of care, breached that duty, and the breach resulted in injury.

"The key question is perhaps whether the AI system recommends an action in a given situation (as many expert systems do), or takes an action (as self-driving and safety-equipped cars do)," Kingston explained.

There's also the unsettled issue of whether intelligent software performing tasks autonomously counts under the law as a product – which often carries a warranty – or as a service.

Kingston said liability will depend on whether the limitations of AI systems are adequately communicated, whether the AI is a product or a service, and whether intent is an issue.

Speaking to The Register, Kingston suggested the difficulty of proving criminal intent with regard to AI software would make such cases rare.

"I expect society to treat failures by AI more as negligence (by someone or other), which is easier to demonstrate," he said, pointing to what he believes are the likely consequences for obvious coding errors.

"Failing to deal with well-known bugs would probably be seen legally as negligence to deal with issues that a reasonably competent programmer would have taken care of, so it's safe to say there's a strong probability that such failures would open programmers to any liability that only required negligence," he explained, adding that the specifics would depend upon the jurisdiction.

Kingston said it's possible society may demand more certification for those involved with code. If so, he said, "it won't just be programmers. Analysts, system designers, and testers may also need certification."

He said other approaches are also possible: "Safety-critical software systems are already developed either with a very careful program of design, testing and verification, or they use formal methods to generate proofs that the system meets the requirements. Of course, that assumes the requirements are correct and complete."
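
Formal methods go far beyond anything a few lines can show, but as a loose illustration of the point about requirements, here's a sketch that writes an assumed requirement – never travel faster than you can stop within the measured gap – as an executable check. The functions, figures, and the requirement itself are invented for this example; a genuine formal proof would establish the property for every possible input rather than asserting it for one:

```python
# Sketch: a requirement written down as an executable check. Illustrative only;
# real safety-critical work would prove this property, not merely assert it.

def braking_distance_m(speed_mps: float, deceleration_mps2: float = 6.0) -> float:
    """Stopping distance under constant deceleration: v^2 / (2a)."""
    return (speed_mps ** 2) / (2 * deceleration_mps2)


def command_speed(speed_mps: float, gap_to_obstacle_m: float) -> float:
    """Assumed requirement: never travel faster than you can stop in the gap."""
    allowed = speed_mps
    while allowed > 0 and braking_distance_m(allowed) > gap_to_obstacle_m:
        allowed -= 0.5  # back off until the stopping-distance requirement holds
    # The check is only as good as the requirement itself, which is Kingston's
    # caveat about requirements being correct and complete.
    assert braking_distance_m(allowed) <= gap_to_obstacle_m
    return allowed


print(command_speed(speed_mps=30.0, gap_to_obstacle_m=40.0))
```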

Kingston suggested he'd like to see AI products covered by professional indemnity insurance, just like human expert consultants. "That still leaves open the question of who should buy the insurance – inventor, designer, manufacturer or vendor," he said. ®
