Facebook pulls plug on language-inventing chatbots? THE TRUTH

Far be it from us to lecture journos on overreacting but cripes – calm down


If you thought artificial intelligence was already overhyped to death, this week will have given you a heart attack.

On Monday, excitement levels among hacks hit the roof amid claims Facebook had scrambled to shut down its chatbots after they started inventing their own language.

Several publications called the programs “creepy.” Some journalists implied Facebook yanked the plug before, presumably, some kind of super-intelligence reared its head. The UK's Sun newspaper demanded to know: "Are machines taking over?" Australian telly channel Seven News even went as far as to call it an “artificial intelligence emergency.” Newsflash: it isn’t.

It’s just a pair of dumb bots exchanging bits of information with one another. Look at a snippet of their “conversation” – they are hardly speaking in a language, let alone developing one.

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

Skynet this ain't.

The explosion of AI terror headlines this week can be traced back to a Facebook project we covered in June. Researchers at the social network giant were trying to teach two agents to use dialogue to negotiate with one another. Specifically, the data scientists were trying to get the programs to barter over objects.

The goal was to train the bots to learn how to plan ahead and communicate effectively to get what they wanted. When they started spewing nonsense, no AI was shut down or killed. Instead, a software bug was found and fixed to get the bots to speak in a more human-like way so the researchers could decipher the results of their own experiments.
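For the curious, here’s the failure mode in toy form. The following Python sketch is purely illustrative – the candidate utterances, scores, and weighting are invented for this article, not lifted from Facebook’s code – but it captures the gist: if each bot simply picks whichever line of dialogue scores best, and the only thing being scored is negotiating payoff, degenerate shorthand can beat plain English.

    # Illustrative sketch only -- not Facebook's actual code. Each bot picks an
    # utterance by scoring candidates on two things: how much the utterance is
    # expected to help the negotiation (task_value) and how human-like it reads
    # (human_likeness). Drop the second term and nothing stops the agents
    # drifting into "i i can i i" style shorthand.

    # Hypothetical candidate utterances with hand-made scores
    CANDIDATES = {
        "i'd like the balls, you can have the books": {"task_value": 0.80, "human_likeness": 0.90},
        "balls have zero to me to me to me to me":    {"task_value": 0.85, "human_likeness": 0.05},
        "give me one hat and two balls":              {"task_value": 0.70, "human_likeness": 0.95},
    }

    def pick_utterance(candidates, human_weight):
        """Return the candidate with the best combined score.

        human_weight = 0 reproduces the 'drift' failure: the degenerate
        shorthand wins because it scores slightly higher on the task alone.
        """
        def score(item):
            _, s = item
            return s["task_value"] + human_weight * s["human_likeness"]
        best, _ = max(candidates.items(), key=score)
        return best

    print("no human-likeness term:  ", pick_utterance(CANDIDATES, human_weight=0.0))
    print("with human-likeness term:", pick_utterance(CANDIDATES, human_weight=1.0))

Run it with the human-likeness weight at zero and the "to me to me" gibberish wins; turn the term back on and plain English comes out on top.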

Zachary Lipton, an incoming assistant professor of machine learning at Carnegie Mellon University in the US, told The Register this week: “The work is interesting. But these are just statistical models, the same as those that Google uses to play board games or that your phone uses to make predictions about what word you’re saying in order to transcribe your messages. They are no more sentient than a bowl of noodles, or your shoes.”

Bots babbling away in tongues isn’t a new or mysterious phenomenon. Researchers from OpenAI found that agents would talk to one another in a kind of Morse code when forced to communicate and work on a task together.

Ryan Lowe, a PhD student at McGill University and a research intern at OpenAI, told The Register that it’s a “very general phenomenon” and is “nothing to be concerned about.”

“Any time you have a multi-agent environment, it’s often more efficient for them to speak in Morse code," he said. "The tasks they face are very limited, and the language they use reflects that – it’s very simple and never language-like. Language has a well-defined syntax and grammar. If the rules aren’t explicitly taught to the agents then what comes out won’t necessarily be language-like either. Natural language does not emerge naturally.”
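To get a feel for what this "emergent communication" actually amounts to, here’s a minimal Python sketch of a signalling game in the spirit Lowe describes – the objects, symbols, and learning rule are all made up for illustration, not taken from OpenAI’s experiments. Two agents are rewarded only when the listener guesses the right object from the speaker’s symbol, and they quickly settle on an arbitrary symbol-to-object code. No grammar or syntax in sight, which is exactly his point.

    import random

    # Toy referential game for illustration only. A speaker sees one of three
    # objects and emits one of three arbitrary symbols; a listener guesses the
    # object from the symbol. Both keep simple count tables and reinforce
    # whatever happened to work.

    OBJECTS = ["ball", "hat", "book"]
    SYMBOLS = ["A", "B", "C"]

    random.seed(0)
    speaker = {o: {s: 1.0 for s in SYMBOLS} for o in OBJECTS}   # object -> symbol preferences
    listener = {s: {o: 1.0 for o in OBJECTS} for s in SYMBOLS}  # symbol -> object preferences

    def sample(weights):
        keys, vals = zip(*weights.items())
        return random.choices(keys, weights=vals)[0]

    for _ in range(5000):
        target = random.choice(OBJECTS)
        symbol = sample(speaker[target])
        guess = sample(listener[symbol])
        if guess == target:              # reinforce only on success
            speaker[target][symbol] += 1.0
            listener[symbol][guess] += 1.0

    # The "language" that emerges is just an arbitrary lookup table
    # (usually, though not always, one symbol per object):
    for o in OBJECTS:
        print(o, "->", max(speaker[o], key=speaker[o].get))

The protocol the agents converge on is nothing more than a private code optimised for one narrow task – which is why reading anything language-like, let alone sinister, into it is a stretch.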

All aboard the hype trash train

Scaremongering and overselling of AI is rampant. There’s a lot of money to be made from rebranding your company as an “AI startup” and claiming your product uses "machine learning." It’s not just a problem of journalists sensationalizing copy – tech giants are offenders too, often using words like “imagination,” “intuition” and “reasoning” to describe their technology in blog posts.

“Some amount of the responsibility for this lies with the large corporate research labs that are aggressively promoting their breakthroughs in a way that most natural scientists would have found unbecoming years ago,” said Lipton.

Facebook recently posted a job listing for an AI editor to help it “develop and execute on editorial strategy and campaigns focused on [its] advancements in AI.” Companies including Amazon, Microsoft, OpenAI, Google Brain and DeepMind all publish articles outlining the latest AI developments.

“The writers may be smart and reasonably well-intentioned," said Lipton. "But there’s a deep problem here. What they are doing is presenting something with the look and feel of journalism, but it isn’t journalism, and it makes little to no effort at being objective – or of taking a critical or skeptical eye to their patrons’ work.” ®
