Regulate, says Musk – OK, but who writes the New Robot Rules?

Cause, accountability, responsibility


When the Knightscope K5 surveillance bot fell into the pond at an office complex in Washington, DC, last month, it wasn’t the first time the company’s Future of Security machines had come a cropper.

In April, a K5 got on the wrong side of a drunken punch but still managed to call it in, reinforcing its maker’s belief that the mobile security unit resembling Star Wars’ R2-D2 has got, err, legs. However, while a robot rolling the wrong way into a pool of water may not exactly be life-threatening, increased automation, robots and AI-enabled machinery will touch lives everywhere, from autonomous vehicles to supermarket shelf-stackers and even home care assistants.

So, what happens when robots and automation go wrong and who is responsible? If a machine kills a person, how far back does culpability go and what can be done about it?

“Current product liability and safety laws are already quite clear on putting the onus on the manufacturers of the product or automated systems, as well as on the distributors and businesses that supply services for product safety,” says Matthew Cockerill of London-based product design firm Seymourpowell.

He’s right, of course. Product liability and safety laws already exist – the UK government is unequivocal on the matter – but we are talking here about technology that can learn to adapt, technology that is taking automation outside the usual realms of business. Surely this throws up a different set of circumstances, and a different set of liabilities?

“I’d expect, certainly in the short term, the major difficulties to be around determining the liability from a specific accident or determining if an automated system has really failed or performed well,” adds Cockerill. “If an autonomous system acts to avoid a group of school children but then kills a single adult, did the system fail or perform well?”

Good question, although if a machine takes any life, that is surely a fail. In this scenario, who would be to blame? Would developers, for example, be liable?
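Cockerill’s dilemma is less abstract than it sounds. Somewhere in an autonomous vehicle’s planning software, candidate manoeuvres are scored and the cheapest one wins, which means someone has already put numbers on outcomes like these. The sketch below is entirely hypothetical – real planners weigh hundreds of factors, and none exposes a tidy “expected casualties” figure or names like these – but it shows where the value judgment, and arguably the liability, ends up living: in the weights.

```python
# Hypothetical sketch only: not any real AV planner's API.
# Illustrates how a cost function can encode the school-children
# vs single-adult trade-off long before any accident happens.
from dataclasses import dataclass

@dataclass
class Trajectory:
    label: str
    expected_casualties: float  # predicted harm along this path
    rule_violations: int        # e.g. crossing a solid line

def trajectory_cost(t: Trajectory,
                    casualty_weight: float = 1_000.0,
                    violation_weight: float = 1.0) -> float:
    # The weights ARE the value judgment: whoever sets
    # casualty_weight has already decided how the dilemma resolves.
    return (t.expected_casualties * casualty_weight
            + t.rule_violations * violation_weight)

options = [
    Trajectory("continue ahead", expected_casualties=5.0, rule_violations=0),
    Trajectory("swerve", expected_casualties=1.0, rule_violations=1),
]
choice = min(options, key=trajectory_cost)
print(f"Planner picks: {choice.label}")  # -> swerve
```

Run it and the planner “chooses” to swerve – not because it reasoned about ethics, but because a developer set a weight to 1,000. That is the thread any liability inquiry would have to pull on.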

Urs Arbter, a partner at consultancy firm Roland Berger, suggests that in some cases they may be. “AI is reshaping the insurance industry,” he says, and although he believes risk will decline with increased automation, especially with autonomous vehicles, “there could be some issues against developers.” Insurers, he adds, are watching it all closely, and although requirements will vary from region to region depending on local laws, there is room for further regulation.

Elon Musk would agree. A recent tweet from the Tesla boss claimed that AI is now riskier than North Korea. He followed it up with another, saying: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”

Easier said than done, but according to Chi Onwurah, UK Labour MP for Newcastle Central and Shadow Minister for Industrial Strategy, Science and Innovation, Musk is not alone in suggesting that regulators and legislators need to consider AI. She points to Murray Shanahan, professor of cognitive robotics at Imperial College London, IPsoft founder Chetan Dube, author and mathematician Cathy O’Neil and many others, herself included, as believing that AI must inform how our regulatory and legislative framework evolves.

“This is not ‘regulating against a potential threat,’ but protecting consumers, citizens, workers now and in the future, which is the job of government,” Onwurah told us. “Good regulation is always forward looking otherwise it is quickly obsolete, and the current regulation around data and surveillance is a prime example of that.”

She suggests there is a precedent too, referring to when communications regulator Ofcom regulated for the convergence of telecoms, audiovisual and radio before it happened.

“There was a long period of debate and discussion with a green paper and a white paper before the 2003 Communications Act was passed, with the aim of looking forward ten years and anticipating some of the threats as well as the opportunities,” says Onwurah.

“This government unfortunately has neither the will nor the intellectual capacity to look forward ten weeks, and as a consequence any AI regulation is likely to be driven by the European Union or knee-jerk reactions to bad tabloid headlines.”

Knee-jerk is something we are used to – we’ve seen a lot of it recently in reaction to growing cybersecurity threats – but still, should we be going unilateral on this? Regulation seems a little pointless in the wider AI scheme of things if it isn’t multilateral, and we are a long way from that being discussed, let alone becoming a reality.

