Pioneering physicist Stephen Hawking has said the creation of general artificial intelligence systems may be the "greatest event in human history" – but, then again, it could also destroy us.
In an op-ed in UK newspaper The Independent, the physicist said IBM's Jeopardy!-busting Watson machine, Google Now, Siri, self-driving cars, and Microsoft's Cortana will all "pale against what the coming decades will bring."
We are, in Hawking's words, caught in "an IT arms race fueled by unprecedented investment and building on an increasingly mature theoretical foundation."
These investments, whether made by huge companies such as Google or startups like Vicarious, have the potential to revolutionize our society. But Prof Hawking worries that though "success in creating AI would be the biggest event in human history," it "might also be the last, unless we learn how to avoid the risks."
So inevitable does Hawking consider the rise of a general artificial intelligence system that he cautioned governments and companies are not doing nearly enough to prepare for its arrival.
"If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades', would we just reply, 'OK, call us when you get here – we'll leave the lights on'? Probably not – but this is more or less what is happening with AI," Hawking wrote.
The only way to stave off a societal meltdown when AI arrives, he said, is to devote serious research effort to the problem at places such as Cambridge's Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.
Hawking's view is not a fringe one. When Google acquired AI company DeepMind earlier this year, its employees reportedly made the creation of an internal ethics board a condition of the acquisition.
Similarly, in their book The Second Machine Age, academics Erik Brynjolfsson and Andrew McAfee cautioned that the automation made possible by new artificial intelligence systems poses a profound threat to global political stability unless governments work out how to handle the employment disruptions that major AI will trigger.
But for all the worries Hawking displays, it's worth noting that a general artificial intelligence may yet be a long way off. In our own profile of AI pioneer Jeff Hawkins, the Palm founder said what his company is working on today "is maybe five per cent of how humans learn."
Considering that Jeff Hawkins' work is considered by experts – including Google's own Director of Research Peter Norvig – to be very advanced, that throws a bit of cold water on Hawking's fiery proclamations.
On the other hand, when Jeff Hawkins told us about his own work, he made a comment that brings grist to Stephen Hawking's mill.
"It's accelerating," Jeff Hawkins told us. "These things are compounding, and it feels like these things are all coming together very rapidly." Perhaps time, as he says, is running out on us to figure out what to do about the coming AI invasion. ®