Why, Robot? Understanding AI ethics
Maybe we're headed for a robo-pocalypse, but let's deal with these other problems first, eh?
Let's leave government out of this...
Robots might have to make good any damage they cause, and “electronic personality” could even be applied to cases where they act autonomously. How much should governments regulate these issues? Not much, says UPenn’s Smith.
“I think we should start with self-regulation, and the reason is that technology is evolving so rapidly,” he says. “The political process tends to be reactive and lag the technology process.” Governments should step in if the private sector makes a hash of it, he argues.
This is the direction it’s already taking. Google’s DeepMind has an AI ethics panel, but it has drawn flak for being opaque. Elon Musk co-founded OpenAI to research "friendly AI", while Google, Facebook, Amazon, IBM and Microsoft formed the Partnership on AI to benefit people and society.
Others are also working on this. Firth-Butterfield is vice-chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The IEEE is currently working on several standards including P7001, which outlines transparency in autonomous systems. Its ethical initiative has also produced a guidance document on ethically-aligned AI design that prioritises human wellbeing.
There is no shortage of guidelines and ethics research efforts to choose from. The high-profile Future of Life Institute, which sports Stephen Hawking, Elon Musk and others among its supporters, has published the Asilomar AI principles, while the British Standards Institute created BS 8611, Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems.
Many of these proposed regulations and research efforts explore the ethical implications of AI both as it exists now and as imagined. One idea involves creating a "kill switch" that could halt the singularity – the runaway recursive development of an AI that just keeps bettering itself until it loops us out of existence.
That’s a concept that some refuse to address, including the authors of this whopping Stanford report on AI in 2030. They plan to update their report every five years for the next century, though, so it may surface later. Others, like Torrance, are keeping an eye on it. “I regard it as something that's important to be aware of as a danger in the mid to long term,” he says.
Along the road to the singularity would be strong AI – machines with human-level general intelligence. If that becomes a thing, some of the ethical discussions become more complex, because AI would be dealing with more nuanced issues, just as we do.
Erden is sceptical that this will happen, but as a philosopher, she questions the idea of concrete ethical guidelines that don’t allow room for manoeuvre. She raises squishy ethical questions like whether it’s OK to lie.
No, it isn’t. Oh, really? What about in this situation, where you’re lying to save someone’s feelings? What about to save a life? What does lying mean, anyway? Can you lie by staying silent?
These are the kinds of Socratic conversations an enlightened parent might have with their kids as they teach them that things aren’t always as binary as they might think. And they’re the sort of thing that makes rigid lists of ethical guidelines difficult to write.
Some of the ethical concepts that may make their way into AI debates have been with us in one form or another since the Sophists, and we still haven’t perfected ourselves. We’re filled with our own biases. We discriminate against each other all the time, knowingly and unknowingly. We’d be less capable than an automated car of making the right decision in an accident – and there may not even be any firm rules on what the right decision is anyway.
Given that we can barely set and meet our own standards, should we worry that much about imposing them on the digital selves that may one day come after us?
Erden thinks so. “Ethics happens in the middle ground, where we accept that we’re not going to give up, but we’re not going to establish something clearly and finally and completely,” she says.
“So we have to manage the mess as best we can. The mess is beautiful, in lots of ways.” ®
We'll be covering machine learning, AI and analytics - and ethics - at MCubed London in October. Full details, including early bird tickets, right here.
*Yes, which was later included in the 1950 short story collection...