Facebook pulls plug on language-inventing chatbots? THE TRUTH

Far be it from us to lecture journos on overreacting but cripes – calm down

If you thought artificial intelligence was already overhyped to death, this week will have given you a heart attack.

On Monday, excitement levels among hacks hit the roof amid claims Facebook had scrambled to shut down its chatbots after they started inventing their own language.

Several publications called the programs “creepy.” Some journalists implied Facebook yanked the plug before, presumably, some kind of super-intelligence reared its head. The UK's Sun newspaper demanded to know: "Are machines taking over?" Australian telly channel Seven News even went as far as to call it an “artificial intelligence emergency.” Newsflash: it isn’t.

It’s just a pair of dumb bots exchanging bits of information with one another. Look at a snippet of their “conversation” – they are hardly speaking in a language, let alone developing one.

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

Skynet this ain't.

The explosion of AI terror headlines this week can be traced back to a Facebook project we covered in June. Researchers at the social network giant were trying to teach two agents to use dialogue to negotiate with one another. Specifically, the data scientists were trying to get the programs to barter over objects.

The goal was to train the bots to plan ahead and communicate effectively to get what they wanted. When they started spewing nonsense, no AI was shut down or killed. Instead, a software bug was found and fixed to get the bots speaking in a more human-like way, so the researchers could actually decipher the output of their own experiment.
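As for the gibberish itself, one speculative reading – our own toy illustration, not FAIR's analysis and nothing like its actual neural models – is that the repetition is a degenerate but perfectly functional code: say, repeating a phrase once per item wanted. A sketch of the idea:

```python
# Toy sketch of a degenerate bartering "code" (our invention, not
# Facebook's model): a quantity is encoded by sheer repetition of a
# phrase. It is trivially decodable by the other side, yet looks
# nothing like a language to a human reader.

def encode_offer(n_balls):
    # "Alice" asks for n balls by repeating "to me" n times.
    return "balls have a ball " + "to me " * n_balls

def decode_offer(message):
    # "Bob" recovers the quantity by counting the repetitions.
    return message.count("to me")

message = encode_offer(7)
print(message)
print(decode_offer(message))  # 7
```

Efficient for the task, useless for humans – which is exactly why the researchers patched the training to keep the bots in readable English.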

Zachary Lipton, an incoming assistant professor of machine learning at Carnegie Mellon University in the US, told The Register this week: “The work is interesting. But these are just statistical models, the same as those that Google uses to play board games or that your phone uses to make predictions about what word you’re saying in order to transcribe your messages. They are no more sentient than a bowl of noodles, or your shoes.”

Bots babbling away in tongues isn’t a new or mysterious phenomenon. Researchers from OpenAI found that agents would talk to one another in a kind of Morse code when forced to communicate and work on a task together.

Ryan Lowe, a PhD student at McGill University and a research intern at OpenAI, told The Register that it’s a “very general phenomenon” and is “nothing to be concerned about.”

“Any time you have a multi-agent environment, it’s often more efficient for them to speak in Morse code," he said. "The tasks they face are very limited, and the language they use reflects that – it’s very simple and never language-like. Language has a well-defined syntax and grammar. If the rules aren’t explicitly taught to the agents then what comes out won’t necessarily be language-like either. Natural language does not emerge naturally.”
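Lowe's point – that private codes emerge whenever co-operating agents are rewarded for communicating, and that the result is arbitrary rather than language-like – has a textbook illustration in the Lewis signaling game. The following is our own toy sketch under simple reinforcement learning, not OpenAI's actual experiment:

```python
import random

random.seed(1)

# Lewis signaling game (toy sketch, not OpenAI's setup): a sender sees
# one of two states and emits one of two signals; a receiver turns the
# signal back into a guess. Both sides reinforce whatever just worked.
sender = [[1.0, 1.0] for _ in range(2)]    # sender[state][signal]
receiver = [[1.0, 1.0] for _ in range(2)]  # receiver[signal][guess]

def sample(weights):
    return random.choices((0, 1), weights=weights)[0]

def converged():
    # Do both states round-trip under each side's current best habit?
    for state in (0, 1):
        signal = max((0, 1), key=lambda s: sender[state][s])
        guess = max((0, 1), key=lambda g: receiver[signal][g])
        if guess != state:
            return False
    return True

steps = 0
while not converged() and steps < 200_000:
    state = random.randint(0, 1)
    signal = sample(sender[state])
    guess = sample(receiver[signal])
    if guess == state:  # shared reward: only successful rounds reinforce
        sender[state][signal] += 1.0
        receiver[signal][guess] += 1.0
    steps += 1

ok = converged()
print(ok, steps)
```

Which mapping the pair settles on depends entirely on the random seed: the "language" is an arbitrary private convention, efficient for the task and nothing more – which is exactly why what comes out never looks like English.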

All aboard the hype trash train

Scaremongering and overselling of AI is rampant. There’s a lot of money to be made from rebranding your company as an “AI startup” and claiming your product uses "machine learning." It’s not just a problem of journalists sensationalizing copy – tech giants are offenders too, often using words like “imagination,” “intuition” and “reasoning” to describe their technology in blog posts.

“Some amount of the responsibility for this lies with the large corporate research labs that are aggressively promoting their breakthroughs in a way that most natural scientists would have found unbecoming years ago,” says Lipton.

Facebook recently posted a job listing for an AI editor to help it “develop and execute on editorial strategy and campaigns focused on [its] advancements in AI.” Companies including Amazon, Microsoft, OpenAI, Google Brain and DeepMind all publish articles outlining the latest AI developments.

“The writers may be smart and reasonably well-intentioned," said Lipton. "But there’s a deep problem here. What they are doing is presenting something with the look and feel of journalism, but it isn’t journalism, and it makes little to no effort at being objective – or of taking a critical or skeptical eye to their patrons’ work.” ®
