Meet the man who inspired Elon Musk’s fear of the robot uprising
Nick Bostrom explains his AI prophecies of doom to El Reg
How to solve a problem like paperclipped dystopia
Even if we come up with a way to control the AI and get it to do “what we mean” and be friendly towards humanity, who then decides what it should do? And who gets to reap the benefits of the likely wild riches and post-scarcity resources of a superintelligence that could take us out into the stars and put the whole of the (uninhabited) cosmos to use?
“We’re not coming from a starting point of thinking the modern human condition is terrible, technology is undermining our human dignity,” Bostrom says. “It’s rather starting from a real fascination with all the cool stuff that technology can do and hoping we can get even more from it, but recognising that there are some particular technologies that also could bring risks that we really need to handle very carefully.
“I feel a little bit like humanity is a bit like an infant or a teenager: some fairly immature person who has got their hands on increasingly powerful instruments. And it’s not clear that our wisdom has kept pace with our increasing technological prowess. But the solution to that is to try to turbo-charge the growth of our wisdom and our ability to solve global coordination problems. Technology will not wait for us, so we need to grow up a little bit faster.”
Bostrom believes that, once we have worked hard on solving the control problem, humanity will have to collaborate on the creation of an AI and ensure its goal is the greater good of everyone, not just a chosen few. Only then does the advent of artificial intelligence, and the superintelligence that follows, stand the greatest chance of producing utopia instead of paperclipped dystopia.
But it’s not exactly an easy task.
“It looks like the thing that could help the most is to do more research into the control problem. Other things that would be helpful, like more world peace and harmony, would be great; it’s just harder to see how three extra people or an extra million dollars in funding would make a material difference to the amount of peace and harmony in the world,” he says.
"So on the margin, it looks like money going to the control problem would be well spent."
Even things you would expect to help humanity towards becoming wiser and better people, such as greater global wealth, could be a double-edged sword when it comes to artificial intelligence.
“In general, economic growth does take some of the pressure off and make us more decent. Whether that economic growth comes from extracting more resources here on Earth or in space or making more efficient use of them doesn’t make much difference,” Bostrom argues.
“Historically, there seems to be a correlation between countries becoming richer and having better rule of law and, in many ways, various metrics of civilisation have improved. So in that sense, faster economic growth is desirable.
“But on the other hand, it might speed the advance towards AI in that faster economic growth might lead to more investment in AI and computer science and that might give us less time to get our act together. The effect of the rate of economic growth on AI risk is ambiguous and hard to be sure about,” he points out.
Musk try harder – bring on the brains
Right now, Bostrom reckons it’s premature to get governments involved in international symposiums or global treaties. What’s needed is targeted research into issues like the control problem, decision theory (how the AI will make choices) and other problems that boffins are only just starting to grapple with. And if the researchers could be altruistically minded as well, that would be terrific.
“People like Musk and Hawking are valuable mainly because they draw attention to the issues and can funnel resources. In the case of Elon Musk, he actually gave $10m to fund research in this area, which is extremely welcome.
SpaceX boss and AI-fearing Elon Musk takes a stroll with US President Barack Obama
“But in terms of the people actually doing the research, what we need are highly talented people with mathematics backgrounds, theoretical computer science backgrounds, maybe some philosophy, working closely with practitioners in the field of AI, computer science and machine learning,” he says.
“Another variable obviously is that one would want the field to attract people that actually care about the long term future for humanity, that have the greater good at heart, as opposed to people that just want to make a quick buck or have some partisan interest.
“The combination of great cognitive power and altruistic motivation would be ideal and the more of those people that get into the field early, the more I think the culture of the field will be shaped,” he adds.
But what Bostrom doesn’t want is for research into AI to stop. He’s not trying to doom-say us out of technological progress. Rather, he just wants to make sure that the field is thinking about all of the risks.
“There’s a delicate balance there. It’s not so much doom-saying that AI will create a catastrophe and we should stop doing it, it’s more saying that, hey, there are problems here that nobody seems to be paying attention to.
“If we actually succeeded in creating machines that were intelligent, how would we ensure that they would be controlled and friendly? That’s a big problem that needs to be solved, but it’s been almost completely ignored until recently,” the prof says.
“That’s really the message that we’re trying to put out there, which is quite different from saying technology is bad, let’s stop.” ®
Nick Bostrom is a philosopher at the University of Oxford and you can find out more about superintelligence, transhumanism and how we’re all living in a computer simulation on his webpage.