
Hawking and friends: Artificial Intelligence 'must do what we want it to do'

*Cough* Don't enslave us *cough*

Rise of the Machines More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek - have added their names to an open letter calling for greater caution in the use of artificial intelligence.

The letter was penned by the Future of Life Institute, a volunteer-run organisation with the not-insubstantial task of working "to mitigate existential risks facing humanity".

Pioneering physicist Hawking and co-founder of SpaceX Musk sit on the FLI’s scientific advisory board alongside actor Morgan Freeman*.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls," it said. Research on how to make AI systems robust and beneficial is "both important and timely", wrote the FLI.

"Our AI systems must do what we want them to do," it said.

The letter was accompanied by a research paper outlining some of the potential adverse effects of AI, including increased economic inequality, unemployment, and greater security threats.

"For certain types of safety-critical AI systems - especially vehicles and weapons platforms - it may be desirable to retain some form of meaningful human control," it said.

However, its tone was considerably more upbeat than Hawking and Musk's repeated Cassandra-like warnings about the impact of AI. Last month Hawking told the BBC that "the development of true artificial intelligence could spell the end of the human race". In October Musk described it as our "biggest existential threat".

"The potential benefits are huge," added the letter. "Since everything that civilization has to offer is a product of human intelligence, we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide [and the] eradication of disease and poverty are not unfathomable." ®

* (We note that Mr Freeman did not sign the letter. Whose side is he on? – Sub-Ed).
