Will AI spell the end of humanity? The tech industry wants you to think so

Or is that what the Matrix wants us to think?


Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity. Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.

So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk?

There are those who believe that AI will be a boon for humanity, improving health services and productivity as well as freeing us from mundane tasks. However, the most vocal leaders in academia and industry are convinced that the danger of our own creations turning on us is real. For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit company, with contributions from tech titans such as Amazon, to prevent an evil AI from bringing about the end of humanity. Universities such as Berkeley, Oxford and Cambridge have established institutes to address the issue. Luminaries like Bill Joy, Bill Gates and Ray Kurzweil have all raised the alarm.

Listening to this, it seems the end may indeed be nigh unless we act before it’s too late.

The role of the tech industry

Or could it be that science fiction and industry-fuelled hype have simply overcome better judgement? The cynic might say that the AI doomsday vision has taken on religious proportions. Of course, doomsday visions usually come with a path to salvation. Accordingly, Kurzweil claims we will be virtually immortal soon through nanobots that will digitise our memories. And Musk recently proclaimed that it’s a near certainty that we are simulations within a computer akin to The Matrix, offering the possibility of a richer encompassing reality where our “programs” can be preserved and reconfigured for centuries.

Elon Musk is concerned about a robot future. Photo: Steve Jurvetson/Flickr (FANUC Robot Assembly Demo at the former NUMMI plant, now Tesla Motors), CC BY-SA

Tech giants have cast themselves as modern gods with the power to either extinguish humanity or make us immortal through their brilliance. This binary vision is buoyed in the tech world because it feeds egos – what conceit could be greater than believing one’s work could usher in such rapid innovation that history as we know it ends? No longer are tech figures cast as mere business leaders, but instead as the chosen few who will determine the future of humanity and beyond.

For Judgement Day researchers, a proclamation of an “existential threat” is not just a call to action; it also attracts generous funding and an opportunity to rub shoulders with the tech elite.

So, are smart machines more likely to kill us, save us, or simply drive us to work? To answer this question, it helps to step back and look at what is actually happening in AI.

Underneath the hype

The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There have been no qualitative breakthroughs in approach. Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power. What is unchanged is that most machine learning systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (for example, capture the king in chess). This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function.
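To make the point about objective functions concrete, here is a minimal, hypothetical Python sketch (not taken from any system mentioned above; the function name and the +1/0/-1 payoff encoding are assumptions for illustration). A game’s objective can be written down in a few lines, whereas a real-world goal such as “keep the country safe” has no comparably crisp definition.

    # Illustrative sketch only: in a two-player, win/lose game the objective
    # function is trivial to specify formally. Names and payoff values are
    # assumptions, not drawn from any system discussed in the article.
    from typing import Optional

    def game_objective(winner: Optional[str], player: str) -> int:
        """Payoff for `player`: +1 for a win, -1 for a loss, 0 for a draw."""
        if winner is None:
            return 0                          # draw
        return 1 if winner == player else -1

    # A real-world objective ("keep the country safe", "drive passengers safely")
    # admits no such crisp definition, so any proxy we write down can be
    # maximised in ways we never intended.
    print(game_objective("black", "black"))   # 1  (black wins)
    print(game_objective("white", "black"))   # -1 (black loses)
    print(game_objective(None, "black"))      # 0  (draw)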

In other cases, it may be harder to define the objective, and this is where AI could go wrong. However, AI is more likely to go wrong through incompetence than malice. For example, imagine that the US nuclear arsenal during the Cold War had been placed under the control of an AI to thwart a sneak attack by the Soviet Union. Through no action of the Soviet Union, a nuclear reactor meltdown occurs in the arsenal and the power grid temporarily collapses. The AI’s sensors detect the disruption and fallout, leading the system to infer an attack is underway. The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence the president is being coerced. Missiles released. End of humanity.

The AI was simply following its programming, which led to a catastrophic error. This is exactly the kind of deadly mistake that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than to an evil AI turning on us – no different from an auto-pilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.

Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine. Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure. Future viruses could be more clever and deadly. However, this essentially follows the arc of history where humans use available technologies to kill one another.

There are real dangers from AI, but they tend to be economic and social in nature. Clever AI will create tremendous wealth for society, but it will leave many people without jobs. Unlike in the industrial revolution, there may simply be no jobs for some segments of society, because machines may be better at every possible job. There will not be a flood of replacement “AI repair person” jobs to take up the slack. So the real challenge will be how to properly assist those (most of us?) who are displaced by AI. Another issue is that people will no longer be looking after one another as machines permanently displace entire classes of labour, such as healthcare workers.

Fortunately, governments may prove more level-headed than tech celebrities if they choose to listen to nuanced advice. A recent report by the UK’s House of Commons Science and Technology Committee on the risks of AI, for example, focuses on economic, social and ethical concerns. The take-home message was that AI will make industry more efficient, but may also destabilise society.

If we are going to worry about the future of humanity, we should focus on the real challenges, such as climate change and weapons of mass destruction, rather than on fanciful killer AI robots.

This article was originally published on The Conversation.
