Meet the man who inspired Elon Musk’s fear of the robot uprising

Nick Bostrom explains his AI prophecies of doom to El Reg


Exclusive Interview Swedish philosopher Nick Bostrom is quite a guy. The University of Oxford professor is known for his work on existential risk, human enhancement ethics, superintelligence risks and transhumanism. He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high.

But he’s perhaps most famous these days for his book, Superintelligence: Paths, Dangers, Strategies, particularly since it was referenced by billionaire space rocket baron Elon Musk in one of his many tweets on the terrifying possibilities of artificial intelligence.

Prophecies of AI-fuelled doom from the likes of Musk, Stephen Hawking and Bill Gates hit the headlines earlier this year. They all fretted that allowing the creation of machine intelligence would lead to the extinction or dystopian enslavement of the human race.

References to The Terminator and Isaac Asimov abounded and anxious types were suddenly sweating over an event that most researchers reckon won't happen until somewhere between 2075 and 2090.

With these dire prophecies in mind, many have read Bostrom’s book as another grim missive, unremittingly pessimistic about our future under our machine overlords.

I'm sorry, Dave, I'm afraid I can't do that

2001: A Space Odyssey

Prof Bostrom tells The Register he’s not the pessimist that many have made him out to be, however.

“I think I have a more balanced view, I think that both outcomes are on the table, the extremely good and the extremely bad,” he says.

“But it makes sense to focus a lot on the possible downsides to see the work that we need to put in – that we haven’t been doing to date – to make sure that we don’t fall through any trapdoors. But I think that there’s a good chance we can get, if we get our act together, a really utopian future.”

In fact, Bostrom’s book isn’t a cut-and-dried analysis of how any machine intelligence would likely be an evil megabot intent on wiping out the human race. Much of the book focusses on how easy it would be for a machine intelligence to believe itself to be happily helping the human race by accomplishing the goal set out for it, but actually end up destroying us all in a problem he calls “perverse instantiation”.

For example, if we programme our AI to do something simple and narrow, such as manufacture paperclips, we could actually be setting ourselves up for a universe composed of nothing but paperclips.

What we mean is that we want the AI to build a few factories and find more efficient ways of making us money in our paperclip venture. But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence, and be totally focussed on making paperclips, it could end up converting all known matter into making paperclips. What to us appears entirely maniacal behaviour makes perfect sense to the AI, its only goal is to make paperclips.

'It’s not clear that our wisdom has kept pace with our increasing technological prowess.' – Bostrom

If we were to try for something a bit more complex, such as “Make humanity happy”, we could all end up as virtual brains hooked up to a source of constant stimulation of our virtual pleasure centres, since this is a very efficient and neat way to take care of the goal of making human beings happy.

Although the AI may be intelligent enough to realise that’s not what we meant, it would be indifferent to that fact. Its very nature tells it to make paperclips or make us happy, so that is exactly what it would do. This is just one example Bostrom gives of how hapless humanity could end up engineering its own destruction through AI.

There are many more, including the issue of who’s doing the programming.

Similar topics

Broader topics


Other stories you might like

  • If AI chatbots are sentient, they can be squirrels, too
    Plus: FTC warns against using ML for automatic content moderation, and more

    In Brief No, AI chatbots are not sentient.

    Just as soon as the story on a Google engineer, who blew the whistle on what he claimed was a sentient language model, went viral, multiple publications stepped in to say he's wrong.

    The debate on whether the company's LaMDA chatbot is conscious or has a soul or not isn't a very good one, just because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.

    Continue reading
  • Cerebras sets record for 'largest AI model' on a single chip
    Plus: Yandex releases 100-billion-parameter language model for free, and more

    In brief US hardware startup Cerebras claims to have trained the largest AI model on a single device powered by the world's largest Wafer Scale Engine 2 chip the size of a plate.

    "Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."

    The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers have to train huge AI models with billions of parameters across more servers.

    Continue reading
  • AMD touts big datacenter, AI ambitions in CPU-GPU roadmap
    Epyc future ahead, along with Instinct, Ryzen, Radeon and custom chip push

    After taking serious CPU market share from Intel over the last few years, AMD has revealed larger ambitions in AI, datacenters and other areas with an expanded roadmap of CPUs, GPUs and other kinds of chips for the near future.

    These ambitions were laid out at AMD's Financial Analyst Day 2022 event on Thursday, where it signaled intentions to become a tougher competitor for Intel, Nvidia and other chip companies with a renewed focus on building better and faster chips for servers and other devices, becoming a bigger player in AI, enabling applications with improved software, and making more custom silicon.  

    "These are where we think we can win in terms of differentiation," AMD CEO Lisa Su said in opening remarks at the event. "It's about compute technology leadership. It's about expanding datacenter leadership. It's about expanding our AI footprint. It's expanding our software capability. And then it's really bringing together a broader custom solutions effort because we think this is a growth area going forward."

    Continue reading

Biting the hand that feeds IT © 1998–2022