Do AI chat bots need a personality bypass – or will we only trust gabber 'droids with character?

Machine-learning boffins debate making software human

Machine communication is an area generating much excitement in AI. The ability to give machines a voice and personality has been the subject of many sci-fi films, and the push in natural language processing has brought that idea closer to reality.

All the big AI players are investing in some sort of chatbot. Google just released Allo, a messaging app that incorporates its Google Assistant. Apple users have Siri, the voice-powered assistant. Amazon has Alexa, the AI assistant inside its Echo speakers. Microsoft has Cortana built into Windows. And IBM has Watson, the machine that famously beat human champions on the American quiz show Jeopardy!

Making machines more human-like is seen as a mark of dominance in the field, and companies are constantly trying to outdo one another. Google’s AI arm DeepMind used WaveNet to make synthesized speech sound more natural, even inserting breathing sounds. The week after DeepMind’s announcement, Microsoft claimed the lowest word-error rate yet for its speech recognition system.

Another area that chatbots could change is customer service. Many startups are gearing their chatbots toward providing a better service to users, whether it’s booking tickets or supporting customers through call centers.

But is the technology getting ahead of what users actually want? Recent research by Dr Chris Brauer, director of innovation and a senior lecturer at the Institute of Management Studies at Goldsmiths, University of London, suggests so.

Speaking at the Re•Work Deep Learning Summit today in London, Brauer explained that it is more helpful to design chatbots around how human the interaction feels than around how human the bot itself appears. Talking to a chatbot can be dehumanizing, and to overcome that, developers should think in terms of “empathetic design.”

“It’s important that the bot seems like it can see the world from someone else’s perspective in order to build a foundation of trust,” Brauer said.

People are more likely to be honest with a service that doesn’t judge them and doesn’t have too many human qualities.

Brauer asked Blake Morrison, professor of creative and life writing and a colleague at Goldsmiths, what kind of character the ideal chatbot should have.

Nick Carraway from The Great Gatsby, Morrison answered. “He doesn’t give you very much away about himself, he’s an observer of others. He allows you little insights into his life, but his main interest is telling you a story where he plays a small part. But he’s there all the time observing everything. And you think he’s given you the true picture,” he told his colleague.

“In conclusion, there isn’t any value in bots having unique personalities – it’s more about the experience,” Brauer said.

He did note, however, that bots could be a “disruption” down the line, and that the way we interact with them could change. But for now it’s still “early days.”

If your chatbot is an asshole, then so are you

Luka.ai envisions a different future. Artem Rodichev, a machine-learning engineer at the AI chatbot startup, said he believes better service comes when the chatbot is personalized and adapts to an individual’s needs.

To do that, the AI has to have personality. “How do you give a bot personality? It’s simple – you talk to it,” Rodichev said during a presentation at the deep-learning summit.

The chatbot has to be given a dataset of words and a large pile of messages from texts or WhatsApp chats so it can learn to talk like the user. As it learns to respond like the user, it begins to adopt a similar personality. A bit like how Microsoft’s chatbot, Tay, was tricked into becoming a Hitler-loving sex troll after miscreants found and exploited a debugging command to teach the software rude phrases: saying “repeat after me” followed by a neo-Nazi outburst told Tay to learn that phrase and parrot it back to other people in conversation.
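
For illustration, here is a minimal sketch of the kind of data preparation that implies: pairing messages from a chat export into (prompt, reply) examples the bot can learn from. The file layout and field names are hypothetical, since the article doesn’t describe Luka.ai’s actual pipeline.

```python
# Hypothetical sketch: turn a chat history into (prompt, reply) training
# pairs. The JSON layout here is assumed; Luka.ai's real data pipeline
# isn't described in the article.
import json

def build_pairs(path):
    with open(path) as f:
        messages = json.load(f)  # assumed: list of {"sender": ..., "text": ...}
    pairs = []
    for prev, curr in zip(messages, messages[1:]):
        # Any message followed by one of the user's own replies becomes a
        # (prompt, reply) example, so the bot learns to answer like the user.
        if curr["sender"] == "user":
            pairs.append((prev["text"], curr["text"]))
    return pairs

pairs = build_pairs("chat_export.json")
print(f"{len(pairs)} training pairs")
```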

Luka.ai uses recurrent neural networks, which draw on a bank of words stored in internal memory and can process arbitrary sentences as input. The sentences in the messages are split into words, each of which is converted into a vector in a high-dimensional space – a technique known as Word2vec.
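
To make that word-to-vector step concrete, here is a minimal Word2vec example using the gensim library. Gensim is our choice for illustration; the article doesn’t say which implementation Luka.ai uses.

```python
# Minimal Word2vec sketch using gensim (an illustrative choice, not
# necessarily what Luka.ai uses). Each word is mapped to a vector in a
# high-dimensional space; words used in similar contexts land close together.
from gensim.models import Word2Vec

sentences = [
    ["book", "me", "a", "table", "for", "two"],
    ["book", "me", "a", "flight", "to", "london"],
    ["cancel", "my", "flight", "to", "london"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)
vec = model.wv["flight"]  # a 100-dimensional vector for one word
print(model.wv.most_similar("flight", topn=2))
```

With a real corpus of millions of messages, nearby vectors end up capturing words used in similar contexts, which is what lets the system compare sentences numerically.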

Various algorithms decide if a particular word is good or bad by assigning a high or low score to it, and then choose how the chatbot should respond. The output text is chosen by how well it correlates to the input text, which the system has learned from analyzing streams of text from the user.
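
A bare-bones version of that selection step might look like the following: average the word vectors of each candidate reply and pick the one closest to the input by cosine similarity. This is a generic retrieval-chatbot sketch, not Luka.ai’s actual scoring code.

```python
# Generic retrieval-style response selection by vector similarity; a
# sketch, not Luka.ai's actual scoring algorithm.
import numpy as np

def sentence_vector(words, wv):
    # Average the word vectors, skipping words the model hasn't seen.
    vecs = [wv[w] for w in words if w in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def best_reply(user_text, candidates, wv):
    query = sentence_vector(user_text.lower().split(), wv)
    scored = [(cosine(query, sentence_vector(c.lower().split(), wv)), c)
              for c in candidates]
    return max(scored)[1]  # the highest-scoring candidate wins

# e.g. best_reply("book me a flight", stored_replies, model.wv)
```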

Over time, as the user interacts with the chatbot, it learns to adapt. It will improve and become more reliable, Rodichev said.

“We aren’t there yet. But in the future, chatbots will be like the one seen in the movie Her,” Rodichev told The Register. ®
