Regulate, says Musk – OK, but who writes the New Robot Rules?

Cause, accountability, responsibility

Starting small

It will probably start small: take the UK gov's recent guidelines (not regulations) on dealing with potential cyber-attacks on smart, connected cars, or the Autonomous and Electric Vehicles Bill, expected to be introduced later this year, which aims to create a new framework for self-driving vehicle insurance.

The fear, of course, is that politicians won't go deep enough and will leave plenty of easily exploited loopholes, or that they'll go too far and restrict things to the point of interfering with the development of AI. When you think about it, regulating against a potential threat, rather than reacting to an existing one, is unusual and perhaps unprecedented.

“Regulations that impede progress are rarely a good thing, if ‘we’ believe that progress to have an overall benefit to society,” warns Karl Freund, senior analyst for HPC and deep learning at Moor Insights and Strategy. OK, so what happens if something goes wrong? Would governments be held to account for not regulating?

“Perhaps an analogy might help,” explains Freund. “If your brake lights fail, and a car crashes into you, who is at fault? The other driver, right? He should have been more careful and not totally relied on the technology of the tail light. If the autopilot of an ADAS-equipped car fails, we may want to sue someone, but I am pretty certain these systems will warn the driver that they are not foolproof, and that the driver engages the autopilot with that understanding.

“And of course, most analysts, if not all, would agree that these systems will save thousands or even tens of thousands of lives every year once widely deployed, with a very small and acceptable error rate. It's just like a vaccine: it can save lives, but a small percentage of patients may experience adverse side effects, and that risk is worth the benefit to the total population.”

Ah, the greater good. Freund makes an understandable point that there will probably be waivers, something Arbter adds could lead to more personalised insurance policies with premiums to match. Arbter says this shouldn't mean increases in prices, but you get the feeling that someone, somewhere, will pay for it all – probably those who can least afford it.

So if a machine goes wrong, how will we really identify the culprit?

According to Alan Winfield, professor of robot ethics at the Bristol Robotics Laboratory, part of the University of the West of England, this is where his “ethical black box” idea comes in. Robots and autonomous systems, he says, should be equipped with the equivalent of a flight data recorder to continuously record sensor and relevant internal status data. The idea is that it can establish cause, accountability and responsibility in the event of an accident caused by a robot or AI-enabled machine.

“It should always be possible to find out why an AI made an autonomous decision,” says Winfield, referring to it as “the principle of transparency”, one of a set of ethical principles being developed by the IEEE.
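Winfield's proposal is easy enough to picture in code. Here is a minimal, hypothetical sketch of such an ethical black box, assuming a fixed-capacity rolling log that continuously records timestamped sensor snapshots alongside each autonomous decision and its rationale, then dumps the window to disk for investigators after an incident. All the names here (EthicalBlackBox, record, dump_incident) are illustrative assumptions, not any real standard or API.

```python
import json
import time
from collections import deque


class EthicalBlackBox:
    """Hypothetical flight-data-recorder analogue for a robot or AI system."""

    def __init__(self, capacity: int = 10_000):
        # deque with maxlen drops the oldest entries automatically,
        # mimicking a flight data recorder's rolling window
        self._log = deque(maxlen=capacity)

    def record(self, sensors: dict, decision: str, rationale: str) -> None:
        # One timestamped snapshot: sensor state, the decision taken,
        # and a human-readable reason for it (Winfield's transparency principle)
        self._log.append({
            "timestamp": time.time(),
            "sensors": sensors,
            "decision": decision,
            "rationale": rationale,
        })

    def dump_incident(self, path: str) -> None:
        # Write the rolling window to disk for post-accident analysis
        with open(path, "w") as f:
            json.dump(list(self._log), f, indent=2)


# Example: a self-driving controller logs every control decision
ebb = EthicalBlackBox(capacity=5_000)
ebb.record(
    sensors={"lidar_min_range_m": 2.4, "speed_kmh": 48.0},
    decision="emergency_brake",
    rationale="obstacle within stopping distance at current speed",
)
ebb.dump_incident("incident.json")
```

The point of the rationale field is exactly Winfield's: an investigator should be able to read back not just what the machine did, but why.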

No hiding, then, and that's rather the point. Why should AI be treated any differently from humans? If an AI contravenes established law, then the owner of the machine – and the developer, if it's proven that guidelines and regulations were not adhered to – should be held to account. If things do go wrong, someone has to pay. That's the system.

Aziz Rahman of business crime solicitors Rahman Ravelli agrees. While he believes the rise of technology and artificial intelligence makes large changes to the way we work possible, he argues that when it comes to AI and fraud, companies have to assess and minimise the risks in exactly the same way as any more conventional threat.

“If we are talking about future situations where the technology is intelligent enough to commit fraud, this possibility has to be recognised and prevented. This means introducing measures that prevent one particular person – or robot in the future situations we are talking about – having the ability to work free from scrutiny. If there is no such scrutiny, the potential for fraud will always be there.”
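Rahman's point about scrutiny maps onto a familiar control: separation of duties. Below is a minimal, hypothetical sketch of a dual-control gate in which no actor, human or AI, can execute a sensitive action it requested itself. The names (ApprovalGate, request, approve) are illustrative assumptions, not any real compliance framework.

```python
class ApprovalGate:
    """Dual-control gate: a sensitive action needs a second, independent actor."""

    def __init__(self):
        self._pending = {}  # action_id -> (requester, description)

    def request(self, action_id: str, requester: str, description: str) -> None:
        # Any actor, human or AI agent, can propose a sensitive action
        self._pending[action_id] = (requester, description)

    def approve(self, action_id: str, reviewer: str) -> str:
        # A different actor must sign off before anything executes
        requester, description = self._pending[action_id]
        if reviewer == requester:
            raise PermissionError("requester cannot approve their own action")
        del self._pending[action_id]
        return f"executing '{description}' (requested by {requester}, approved by {reviewer})"


gate = ApprovalGate()
gate.request("tx-42", requester="trading-bot", description="transfer funds")
try:
    gate.approve("tx-42", reviewer="trading-bot")   # self-approval is blocked
except PermissionError as err:
    print(err)
print(gate.approve("tx-42", reviewer="compliance-officer"))  # independent sign-off
```

Trivial as it looks, this is the shape of the measure Rahman describes: the fraud opportunity disappears once no single party, silicon or otherwise, can act free from scrutiny.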

You could take that further: if there is no scrutiny of AI, there will be chaos, and AI developers will no doubt be the target of any government intervention. Then again, who will governments consult over regulation to get an understanding of AI's potential and limitations?

Developers, of course. ®
