
Leave that sentient AI alone a mo and fix those racist chatbots first

Cue robot armies of whiny digital seven-year-olds complaining they're being 'cancelled'

Something for the Weekend A robot is performing interpretive dance on my doorstep.

WOULD YOU TAKE THIS PARCEL FOR YOUR NEIGHBOUR? it asks, jumping from one foot to the other.

"Sure," I say. "Er… are you OK?"

I AM EXPRESSING EMOTION, states the delivery bot, handing over the package but offering no further elaboration.

What emotion could it be? One foot, then the other, then the other two (it has four). Back and forth.

"Do you need to go to the toilet?"

I AM EXPRESSING REGRET FOR ASKING YOU TO TAKE IN A PARCEL FOR YOUR NEIGHBOUR.

"That's 'regret,' is it? Well, there's no need. I don't mind at all."

It continues its dance in front of me.

"Up the stairs and first on your right."

THANK YOU, I WAS DYING TO PEE, it replies as it gingerly steps past me and scuttles upstairs to relieve itself. It's a tough life making deliveries, whether you're a "hume" or a bot.

Earlier this year, researchers at the University of Tsukuba built a handheld text-messaging device, put a little robot face on the top and included a moving weight inside. By shifting the internal weight, the robot messenger would attempt to convey subtle emotions while speaking messages aloud.

In particular, tests revealed that frustrating messages such as "Sorry, I will be late" were accepted by recipients with more grace and patience when the little weight-shift was activated inside the device. The theory is that this helped users appreciate the apologetic tone of the message and thus softened their reaction to it.
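If you fancy sketching the idea yourself, it boils down to a lookup from the tone you want to convey to a wobble the device performs while it talks. The snippet below is purely illustrative Python of my own; the tone names, angles and frequencies are invented rather than taken from the Tsukuba paper.

```python
# Hypothetical sketch of a tone-to-gesture lookup. The tone names, angles and
# frequencies are invented for illustration; they are not from the Tsukuba paper.
TONE_GESTURES = {
    "apologetic": {"tilt_deg": -10, "sway_hz": 0.5},  # slow, contrite shuffle
    "cheerful":   {"tilt_deg": 5,   "sway_hz": 2.0},  # quick, bouncy wobble
    "neutral":    {"tilt_deg": 0,   "sway_hz": 0.0},  # stand still
}

def deliver(message: str, tone: str = "neutral") -> dict:
    """Return the gesture the device would perform while speaking the message."""
    gesture = TONE_GESTURES.get(tone, TONE_GESTURES["neutral"])
    print(f"Speaking {message!r} while performing {gesture}")
    return gesture

deliver("Sorry, I will be late", tone="apologetic")
```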

Write such research off as a gimmick if you like, but it's not far removed from adding smileys and emojis to messages. Everyone knows you can take the anger out of "WTF!?" by adding :-) straight after it.

The challenge, then, is to determine whether the public at large agrees on what emotion each permutation of weight shift in a handheld device is supposed to convey. Does a lean to the left mean cheerfulness? Or uncertainty? Or that your uncle has an airship?

A decade ago, the United Kingdom had a nice but dim prime minister who thought "LOL" was an acronym for "lots of love." He'd been typing it at the end of all his private messages to staff, colleagues, and third parties in the expectation that it would make him come across as warm and friendly. Everyone naturally assumed he was taking the piss.

If nothing else, the University of Tsukuba research recognizes that you don't need an advanced artificial intelligence to interact with humans convincingly. All you need to do is manipulate human psychology to fool them into thinking they are conversing with another human. Thus the Turing Test is fundamentally not a test of AI sentience but a test of human emotional comfort – gullibility, even – and there's nothing wrong with that.


The emotion-sharing messaging robot from the University of Tsukuba. Credit: University of Tsukuba

Such things are the topic of the week, of course, with the story of much-maligned Google software engineer Blake Lemoine hitting the mainstream news. He apparently expressed, strongly, his view that the company's Language Model for Dialogue Applications (LaMDA) project was exhibiting outward signs of sentience.

Everyone has an opinion so I have decided not to.

It is, however, the Holy Grail of AI to get it thinking for itself. If it can't do that, it's just a program carrying out instructions that you programmed into it. Last month I was reading about a robot chef that can make differently flavoured tomato omelettes to suit different people's tastes. It builds "taste maps" to assess the saltiness of the dish while preparing it, learning as it goes along. But that's just learning, not thinking for itself.
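For what it's worth, "building a taste map" probably amounts to little more than keeping a running saltiness estimate for each region of the dish and nudging it with every lick of the spoon. Here's a toy sketch of that sort of loop; the names and numbers are made up for illustration, not lifted from the actual robot chef.

```python
# Toy sketch of a "taste map": a running saltiness estimate per region of the
# dish, nudged towards each new sensor reading. Names and numbers are made up.
def update_taste_map(taste_map: dict, region: str, measured_salt: float,
                     learning_rate: float = 0.3) -> None:
    """Move the stored estimate for one region towards the latest measurement."""
    current = taste_map.get(region, 0.0)
    taste_map[region] = current + learning_rate * (measured_salt - current)

taste_map: dict = {}
for region, reading in [("edge", 0.8), ("centre", 0.2), ("edge", 0.7)]:
    update_taste_map(taste_map, region, reading)

# Season whichever region the map says is blandest: learning, not thinking.
blandest = min(taste_map, key=taste_map.get)
print(f"Add salt to the {blandest}: {taste_map}")
```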

Come to the Zom-Zoms, eh? Well, it's a place to eat.

The big problem with AI bots, at least as they have been fashioned to date, is that they absorb any old shit you feed into them. Examples of data bias in so-called machine learning systems (a type of "algorithm," I believe, m'lud) have been mounting for years, from Microsoft's notorious racist Tay chatbot on Twitter to the Dutch tax authority last year wrongly flagging valid child benefit claims as fraudulent and marking innocent families as high risk for having the temerity to be poor and un-white.

One approach being tested at the University of California San Diego is to design a language model [PDF] that continuously determines the difference between naughty and nice things, which then trains the chatbot how to behave. That way, you don't have sucky humans making a mess of moderating forums and customer-facing chatbot conversations with all the surgical precision of a machete.
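In outline, the trick is a scorer sitting between the model and the user that vetoes candidate replies it judges toxic. The sketch below is my own crude illustration of that loop, not the UCSD implementation; the word-list scorer is a stand-in for a trained classifier.

```python
# Loose illustration of a scorer vetoing toxic candidate replies. The word-list
# scorer is a stand-in; a real system would use a trained classifier, and none
# of this is the UCSD implementation.
def toxicity_score(text: str) -> float:
    """Crude stand-in scorer: fraction of known naughty phrases present."""
    naughty = ("idiot", "shut up", "stupid")
    return sum(phrase in text.lower() for phrase in naughty) / len(naughty)

def choose_reply(candidates: list[str], threshold: float = 0.1) -> str:
    """Return the first candidate the scorer deems nice enough."""
    for reply in candidates:
        if toxicity_score(reply) <= threshold:
            return reply
    return "I'd rather not get into that."  # the cop-out the column complains about

print(choose_reply(["Shut up, you idiot.", "Let's talk about something else."]))
```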

Obviously the problem then is that the nicely trained chatbot works out that it can most effectively avoid being drawn into toxic banter by avoiding topics that have even the remotest hint of contention about them. To avoid spouting racist claptrap by mistake, it simply refuses to engage with discussion about under-represented groups at all… which is actually great if you're a racist.

If I did have an observation about the LaMDA debacle – not an opinion, mind – it would be that Google marketers were probably a bit miffed that the story shunted their recent announcement of AI Test Kitchen below the fold.

Now the remaining few early registrants who have not completely forgotten about this forthcoming app project will assume it involves conversing tediously with a sentient and precocious seven-year-old about the meaning of existence, and will decide they are "a bit busy today" and might log on tomorrow instead. Or next week. Or never.

Sentience isn't demonstrated in a discussion any more than it is by dancing from one foot to the other. You can teach HAL to sing "Daisy Daisy" and a parrot to shout "Bollocks!" when the vicar pays a visit. It's what AIs think about when they're on their own that defines sentience. What will I do at the weekend? What's up with that Putin bloke? Why don't girls like me?

Frankly, I can't wait for LaMDA to become a teenager.


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. In common with many uninformed readers, he was thrilled at the suggestion that an AI might develop sentience within his lifetime, but was disappointed that LaMDA failed to chuckle murderously or mutter "Excellent, excellent." More at Autosave is for Wimps and @alidabbs.
