People who regularly talk to AI chatbots often start to believe they're sentient, says CEO

Plus: Activists fight for EU ban on AI lie detectors, and is the age prediction tool used by Meta accurate?

In brief Many people come to believe they're interacting with something sentient when they talk to AI chatbots, according to the CEO of Replika, an app that lets users design their own virtual companions.

Replika users can customize how their chatbots look and pay for extra features, such as particular personality traits. Millions have downloaded the app, and many chat regularly with their made-up bots. Some even begin to think their digital pals are sentient.

"We're not talking about crazy people or people who are hallucinating or having delusions," the company's founder and CEO, Eugenia Kuyda, told Reuters. "They talk to AI and that's the experience they have."

A Google engineer made headlines last month when he said he believed one of the company's language models was conscious. Blake Lemoine was largely ridiculed, but he doesn't seem to be alone in anthropomorphizing AI.

These systems are not sentient, however; they merely trick humans into thinking they have some intelligence. They mimic language and regurgitate it somewhat haphazardly, without any understanding of the words or the world they describe.

Still, Kuyda said humans can be swayed by the technology.

"We need to understand that [this] exists, just the way people believe in ghosts," Kuyda said. "People are building relationships and believing in something."

EU should ban AI lie detectors, say activists

The European Union's AI Act, a proposal to regulate the technology, is still being debated, and some experts are calling for a ban on automated lie detectors.

Private companies supply the technology to government officials for use at borders. The AI algorithms detect and analyze signals such as a person's eye movements, facial expressions, and tone of voice to try to discern whether someone might be lying. Activists and legal experts believe the technology should be banned in the EU under the upcoming AI Act.

"You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise," Petra Molnar, an associate director of the nonprofit Refugee Law Lab, told Wired. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."

Trying to detect whether someone is lying from visual and physical cues isn't exactly a science. Standard polygraph tests are shaky, and it's not clear that automating the process makes it any more accurate. Using such risky technology on vulnerable people like refugees isn't ideal.

Can AI really tell how old you look?

Surprise, surprise – AI algorithms designed to predict someone's age from images aren't always accurate.

In an attempt to crack down on young users lying about their age on social media, Meta announced it was working with Yoti, a computer vision startup, to verify people's ages. Users who edit their date of birth to register as over 18 have the option of uploading a video selfie, and Yoti's technology is then used to estimate whether they look old enough.

But its algorithms aren't always accurate. Reporters from CNN, who tested an online demo of a different version of the software on their own faces, found the results were hit or miss. Yoti's algorithms predicted a correct target age range for some, but in one case were off by more than a decade – predicting someone looked 17-21 when they were actually in their mid-30s.

The system analyzing videos from Meta users reportedly struggles more with estimating the ages of teenagers from 13 to 17 who have darker skin tones. It's tricky for humans to guess someone's age just by looking at them, and machines probably don't fare much better. ®
