When clever code kills, who pays and who does the time? A Brit expert explains to El Reg

Liability for artificial intelligence won't be easy

Analysis On September 26, 1983, Stanislav Petrov, an officer in the Soviet Union's Air Defense Forces, heard an alarm and saw that the warning system he'd been assigned to monitor showed the US had launched five nuclear missiles.

Suspecting an error with the system's sensors, he waited instead of alerting his superiors, who probably would have ordered a devastating retaliatory strike.

After a few tense minutes, Petrov's suspicion was vindicated when nothing happened. There was no American missile launch, and the world avoided a catastrophe through his decision to ignore the phantom warheads.

In an academic paper titled Artificial Intelligence and Legal Liability, Dr John Kingston, a senior lecturer in computers, engineering and mathematics at the University of Brighton in England, observed that the errant warning was later attributed to an orbiting satellite mistaking sunlight reflected off clouds for missile heat signatures.

Kingston suspects an AI system would not have performed as well.

"If an AI system had been in charge of the Soviet missile launch controls that day, it may well have failed to identify any problem with the satellite, and launched the missiles," wrote Kingston. "It would then have been legally liable for the destruction that followed, although it is unclear whether there would have been any lawyers left to prosecute the case."

Kingston in his paper explored the issue of accountability for AI, which he defines as any system that can recognize a situation or event and then take action through an IF-THEN conditional statement.

Yes, it's a low bar. It's pretty much any software, and yet it's arguably one of the better definitions of AI because it avoids fruitless attempts to distinguish between what is and isn't intelligence.
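
To make that definition concrete, here's a minimal sketch – not from Kingston's paper, and with the function name, sensor values and confidence threshold invented purely for illustration – of the kind of IF-THEN system he has in mind, loosely framed around the Petrov scenario:

# Toy illustration of Kingston's minimal definition of AI: a system that
# recognises a situation or event and responds via an IF-THEN rule.
# All names and numbers here are hypothetical, chosen to echo the Petrov story.

def handle_launch_warning(reported_launches: int, sensor_confidence: float) -> str:
    """Decide what to do when the early-warning system reports missile launches."""
    # IF the system reports launches with high confidence, THEN escalate;
    # otherwise treat the reading as a possible sensor fault and verify first.
    if reported_launches > 0 and sensor_confidence > 0.9:
        return "alert superiors"
    return "wait and verify the sensors"

print(handle_launch_warning(reported_launches=5, sensor_confidence=0.95))
# prints: alert superiors

The point is how little it takes to satisfy the definition: the "intelligence" amounts to a single conditional statement.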

Criminal liability

Kingston recounted the legal requirements for criminal liability under US law – an actus reus (an action or a failure to act) and a mens rea (a mental intent) – and examined how that might apply to AI software.

Pointing to a framework laid out in 2010 by Gabriel Hallevy, a professor at Israel's Ono Academic College, Kingston recounted three possible legal models for criminal liability.

If an AI program goes seriously awry, it might be treated by the courts as an entity with insufficient mental capacity for criminal intent. The software, and its makers and operators, would then be off the hook – unless the programmers or users were found to have directed the offending action.

Programmers might be held liable as accomplices to a criminal act by an AI if the crime was deemed a "natural or probable consequence" of the software's operation. Kingston recounted the case of a worker at a Japanese motorcycle factory who was accidentally killed by a robot that erroneously identified him as a threat to its mission – an example, he argued, of a problem that should have been foreseen.

Or such software might qualify for direct liability. That becomes complicated, however: the act itself (the actus reus) may be easy to establish, but the intent (the mens rea) may not be easy to prove.

Intent, however, isn't necessary in "strict liability" situations, such as exceeding the speed limit. A self-driving car caught speeding could be held liable without any need to establish intent, since its code should have kept it within the limit.

Legal defenses

Kingston pointed out that finding an AI system criminally liable doesn't help clarify who to punish, and noted that legal defenses invoked for people – such as insanity, coercion or intoxication – may be applied to software in situations where, for example, a computer virus has been identified on the affected system.

In other words, a celeb can claim they went crazy and broke the law because they mixed up their medical prescriptions; a computer that goes berserk can claim it got a virus.

He then went on to analyze how civil law might apply. AI software could be found negligent – a tort, or civil wrong, under the law – if it owed a duty of care, failed to meet it, and caused injury as a result.

"The key question is perhaps whether the AI system recommends an action in a given situation (as many expert systems do), or takes an action (as self-driving and safety-equipped cars do)," Kingston explained.

There's also the unsettled question of whether intelligent software performing tasks autonomously counts under the law as a product – which often carries warranty obligations – or a service.

Kingston said liability will depend on whether the limitations of AI systems are adequately communicated, whether the AI is a product or a service, and whether intent is an issue.

Speaking to The Register, Kingston suggested the difficulty of proving criminal intent with regard to AI software would make such cases rare.

"I expect society to treat failures by AI more as negligence (by someone or other), which is easier to demonstrate," he said, pointing to what he believes are the likely consequences for obvious coding errors.

"Failing to deal with well-known bugs would probably be seen legally as negligence to deal with issues that a reasonably competent programmer would have taken care of, so it's safe to say there's a strong probability that such failures would open programmers to any liability that only required negligence," he explained, adding that the specifics would depend upon the jurisdiction.

Kingston said it's possible society may demand more certification for those involved with code. If so, he said, "it won't just be programmers. Analysts, system designers, and testers may also need certification."

He said other approaches are also possible: "Safety-critical software systems are already developed either with a very careful program of design, testing and verification, or they use formal methods to generate proofs that the system meets the requirements. Of course, that assumes the requirements are correct and complete."

Kingston suggested he'd like to see AI products covered by professional indemnity insurance, just like human expert consultants. "That still leaves open the question of who should buy the insurance – inventor, designer, manufacturer or vendor," he said. ®
