Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

'You're more likely to die from robots than cancer'


Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species.

A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER) to analyse the ultimate risks to the future of mankind - including bio- and nanotech, extreme climate change, nuclear war and artificial intelligence.

Beyond science fiction's frequent portrayals of evil - or just misguidedly deadly - AI, real scientists have also theorised that super-intelligent machines could pose a danger to the human race.

Jaan Tallinn, the former software engineer who was one of the founders of Skype, has campaigned for serious discussion of the ethical and safety aspects of artificial general intelligence (AGI).

Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said.

Humankind's progress is now marked less by evolutionary processes and more by technological progress, which allows people to live longer, accomplish tasks more quickly and destroy more or less at will.

Both Price and Tallinn said they believe the rising curve of computing complexity will eventually lead to AGI, and that the critical turning point after that will come when the AGI is able to write the computer programs and create the tech to develop its own offspring.


“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”

CSER hopes to gather experts from policy, law, risk, computing and science to advise the centre and help with investigating the risks.

“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” Price said.

“Nature didn’t anticipate us, and we in our turn shouldn’t take artificial general intelligence (AGI) for granted.

"We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.” ®
