RotM Robots will destroy humanity unless we write new laws to control them, a UK Parliamentary committee has been told.
“The key question is: if something goes wrong, who is responsible?” pondered the Commons Select Committee for Science and Technology, in a report released today.
Microsoft's Dave Coplin, the firm's “chief envisioning officer”, was quoted as saying that a governmental “safety net” was needed to protect humanity from the Rise of the Machines.
According to Future Advocacy – an organisation “working on the greatest challenges humanity faces” whose website chips in on topics ranging from starving kids to AI and seemingly anything and everything in between – AI could “have the power to kill without any human intervention”.
Google's Deepmind AI division also stuck its oar in, telling Parliamentarians: “We support a ban by international treaty on lethal autonomous weapons systems that select and locate targets and deploy lethal force against them without meaningful human control.”
Meanwhile, in the real world, the Ministry of Defence is pottering about off Scotland with a mixed flotilla of semi-autonomous undersea drones, the majority of which are designed to clear sea minefields and provide surveillance of the ocean. Terminator it ain't. The ministry gave the committee its usual boilerplate statement about everything it does being compliant with international law.
The UK doesn't yet have anything like an autonomous weapons capability. The will to commission R&D into this area – publicly, at least – hasn't been seen, chiefly because sensible people know that “AI” consists of computer programs that are really, really good at obscure board games and not a whole lot else, for now. The nearest we've got to robot war machines in Blighty is self-driving boats, submersibles and fragile surveillance drones.
Nonetheless, taking legal measures now to prevent the Rise of the Machines later on would be no bad thing. We don't want to wake up in a world run by Talkie Toaster, after all. ®