Multibillionaire tech ace Elon Musk has a bee in his bonnet about the threat to humanity from ... artificial intelligence. And since he's a major investor in the technology, he ought to know.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.— Elon Musk (@elonmusk) August 3, 2014
Musk's fears, as he suggests in other book recommendations, are that once mankind invents a working AI system, the computers may well decide we're surplus to requirements – and dump us faster than a corporate getting rid of a legacy POTS exchange.
The idea is much beloved by speculative fiction writers and some serious technologists – both those predicting doom and those taking the opposite view, such as Iain M. Banks, who suggested in his Culture novels that sentient computers would look after their creators and let them lead cosseted existences.
Musk's comments are slightly concerning, however, given that he is directly involved in the machine intelligence business. In March, the SpaceX and Tesla supremo, along with Facebook's Mark Zuckerberg, pumped $40m into AI software firm Vicarious – which is seeking to virtualize the neocortex of the human brain so that computers can develop their own intelligence.
Like Jeff Hawkins' own neocortex-like software, Vicarious's designs are strikingly different to classic neural network models of the brain.
In his book The Singularity Is Near, futurologist Ray Kurzweil estimates that by 2045, thanks to breakthroughs in the way we implement powerful digital grey matter, an AI whose intelligence outstrips that of its human creators will emerge.
Musk is in a position to know what's going on, given his close involvement in the scene. Maybe it's time to take a leaf out of William Gibson's 30-year-old AI masterpiece Neuromancer and create a Turing Police, who would drop a digital nine in the dome of any potential AI system. ®