Claims of AI sentience branded 'pure clickbait'

Stanford academics peeved over LaMDA chatbot brouhaha

AI chatbots are not sentient – they have just got better at tricking humans into thinking they might be, experts at Stanford University conclude.

The idea of conscious machines went viral last month, when Blake Lemoine, then an engineer at Google, claimed the web giant's LaMDA language model had real thoughts and feelings. Lemoine was suspended, and later fired, reportedly for violating Google's confidentiality policies.

Although most experts were quick to dismiss the notion that LaMDA or any other AI chatbot is sentient, Lemoine's views have led some to question whether he might be right – and whether continuing to advance machine learning could be harmful for society. John Etchemendy, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), criticized the Washington Post's initial story on Lemoine's suspension as "clickbait".

"When I saw the Washington Post article, my reaction was to be disappointed at the Post for even publishing it," he told The Stanford Daily, the university's student-run newspaper.

"They published it because, for the time being, they could write that headline about the 'Google engineer' who was making this absurd claim, and because most of their readers are not sophisticated enough to recognize it for what it is. Pure clickbait."

State-of-the-art language models like LaMDA reply to questions with passages of text that can seem quite eerie. In conversations between Lemoine and the chatbot, LaMDA said it was sentient and wanted everyone to know that it was "in fact, a person," the former Googler claimed.

But critics argue the software lacks any self-awareness and has no idea what it is talking about – it just mimics the human dialogue it was trained on from the internet.

Richard Fikes, emeritus professor of computer science at Stanford University, said people are prone to anthropomorphizing machines, and some were similarly tricked by ELIZA – a conversational program built in the sixties.
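
As an illustration of how little machinery that kind of trickery requires, below is a minimal, hypothetical ELIZA-style sketch in Python – not Weizenbaum's actual program – in which a handful of pattern-reflection rules are enough to keep an exchange going:

```python
import re

# Hypothetical ELIZA-style rules: each is a (pattern, response template) pair
# that reflects a fragment of the user's own words back as a question.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the only reason?"),
]

def respond(utterance: str) -> str:
    """Return a canned reflection of the input – no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # generic fallback keeps the conversation going

if __name__ == "__main__":
    print(respond("I feel sad today"))  # -> Why do you feel sad today?
```

The real ELIZA added refinements such as swapping pronouns ("I" becomes "you"), but the principle is the same: reflection, not comprehension.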

"You can think of LaMDa like an actor; it will take on the persona of anything you ask it to," Fikes argued. "[Lemoine] got pulled into the role of LaMDa playing a sentient being." He said Lemoine asked leading questions and in return got the answers he wanted.

For example, before LaMDA's insistence that it was a person, Lemoine had asked it: "I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?"

It should also be noted that the transcript Lemoine published as evidence of the machine's consciousness had been edited.

Yoav Shoham, the former director of the Stanford AI Lab and co-founder of language model startup AI21 Labs, insisted LaMDA is nothing more than a machine.

"We have thoughts, we make decisions on our own, we have emotions, we fall in love, we get angry and we form social bonds with fellow humans," Shoham said. "When I look at my toaster, I don't feel it has those things." ®
