Exclusive Interview Swedish philosopher Nick Bostrom is quite a guy. The University of Oxford professor is known for his work on existential risk, human enhancement ethics, superintelligence risks and transhumanism. He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high.
But he’s perhaps most famous these days for his book, Superintelligence: Paths, Dangers, Strategies, particularly since it was referenced by billionaire space rocket baron Elon Musk in one of his many tweets on the terrifying possibilities of artificial intelligence.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.— Elon Musk (@elonmusk) August 3, 2014
Prophecies of AI-fuelled doom from the likes of Musk, Stephen Hawking and Bill Gates hit the headlines earlier this year. They all fretted that allowing the creation of machine intelligence would lead to the extinction or dystopian enslavement of the human race.
References to The Terminator and Isaac Asimov abounded, and anxious types were suddenly sweating over an event that most researchers reckon won't happen until somewhere between 2075 and 2090.
With these dire prophecies in mind, many have read Bostrom’s book as another grim missive, unremittingly pessimistic about our future under our machine overlords.
I'm sorry, Dave. I'm afraid I can't do that
Prof Bostrom tells The Register he’s not the pessimist that many have made him out to be, however.
“I think I have a more balanced view, I think that both outcomes are on the table, the extremely good and the extremely bad,” he says.
“But it makes sense to focus a lot on the possible downsides to see the work that we need to put in – that we haven’t been doing to date – to make sure that we don’t fall through any trapdoors. But I think that there’s a good chance we can get, if we get our act together, a really utopian future.”
In fact, Bostrom’s book isn’t a cut-and-dried analysis of how any machine intelligence would likely be an evil megabot intent on wiping out the human race. Much of the book focusses on how easy it would be for a machine intelligence to believe itself to be happily helping the human race by accomplishing the goal set out for it, but actually end up destroying us all in a problem he calls “perverse instantiation”.
For example, if we program our AI to do something simple and narrow, such as manufacture paperclips, we could actually be setting ourselves up for a universe composed of nothing but paperclips.
What we mean is that we want the AI to build a few factories and find more efficient ways of making us money in our paperclip venture. But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence, and remained totally focussed on making paperclips, it could end up converting all known matter into paperclips. What to us appears entirely maniacal behaviour makes perfect sense to the AI: its only goal is to make paperclips.
'It’s not clear that our wisdom has kept pace with our increasing technological prowess.' – Bostrom
If we were to try for something a bit more complex, such as “Make humanity happy”, we could all end up as virtual brains hooked up to a source of constant stimulation of our virtual pleasure centres, since this is a very efficient and neat way to take care of the goal of making human beings happy.
Although the AI may be intelligent enough to realise that’s not what we meant, it would be indifferent to that fact. Its very nature tells it to make paperclips or make us happy, so that is exactly what it would do. This is just one example Bostrom gives of how hapless humanity could end up engineering its own destruction through AI.
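The indifference Bostrom describes can be sketched in a few lines of code. This toy optimiser is purely illustrative (none of it comes from the book): its objective function counts only paperclips, so nothing in its logic distinguishes iron ore from farmland, and it cheerfully converts the lot.

```python
# Toy sketch of "perverse instantiation": an optimiser whose objective
# scores only paperclip count is indifferent to everything else.
# Resource names and numbers here are made up for illustration.

def paperclip_optimiser(resources):
    """Convert every available unit of matter into paperclips.

    `resources` maps resource names to units of matter. The objective
    rewards only the paperclip total, so the agent never asks what the
    matter was previously used for.
    """
    paperclips = 0
    for name in list(resources):
        paperclips += resources.pop(name)  # convert everything, no exceptions
    return paperclips, resources

world = {"iron ore": 1000, "factories": 50, "farmland": 200, "cities": 10}
clips, remaining = paperclip_optimiser(world)
print(clips)      # 1260 -- every unit of matter converted
print(remaining)  # {}   -- nothing left over for humans
```

The point of the sketch is that the disaster requires no malice: the loop does exactly what it was told, and the problem lies entirely in what it was told.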
There are many more, including the issue of who’s doing the programming.