Do AI chat bots need a personality bypass – or will we only trust gabber 'droids with character?

Machine-learning boffins debate making software human

Machine communication is an area generating much excitement in AI. The ability to give machines a voice and personality has been the subject of many sci-fi films, and the push in natural language processing has brought that idea closer to reality.

All the big AI players are investing in some sort of chatbot. Google just released Allo, a messaging app that incorporates its Google Assistant. Apple users have Siri, the voice-powered assistant. Amazon has Alexa, the AI assistant inside its Echo speakers. Microsoft has Cortana for its Windows computers. And IBM has Watson, a machine that famously beat human competitors by answering more questions correctly on the American game show Jeopardy.

Being able to make machines more human-like is a sign of dominance in the field. Companies are always trying to outdo each other. Google’s AI arm DeepMind used WaveNet to make machines sound more natural, and even inserted breathing sounds. The week after DeepMind’s announcement, Microsoft claimed to have achieved the lowest word-error rate in its speech recognition system.

Another area that chatbots could change is customer service. Many startups are gearing their chatbots toward providing a better service to users, whether it’s booking tickets or supporting customers through call centers.

But is the technology getting ahead of what users actually want? Recent research by Dr Chris Brauer, director of innovation and senior lecturer at the Institute of Management Studies at Goldsmiths, University of London, suggests so.

Speaking at the Re•Work Deep Learning Summit today in London, Brauer explained that it was more helpful to design chatbots around how human the interaction feels, rather than how human the chatbot should be itself. Talking to a chatbot can be dehumanizing, and to overcome that developers should think about including an “empathetic design.”

“It’s important that the bot seems like it can see the world from someone else’s perspective in order to build a foundation of trust,” Brauer said.

People are more likely to be honest with a service that doesn't judge them and doesn't have too many human qualities.

Brauer asked Blake Morrison, professor of creative and life writing and a colleague at Goldsmiths, what kind of character the ideal chatbot should have.

Nick Carraway from The Great Gatsby, Morrison answered. “He doesn’t give you very much away about himself, he’s an observer of others. He allows you little insights into his life, but his main interest is telling you a story where he plays a small part. But he’s there all the time observing everything. And you think he’s given you the true picture,” he told his colleague.

“In conclusion, there isn’t any value in bots having unique personalities – it’s more about the experience,” Brauer said.

He did note, however, that bots could be a "disruption" in the future, and that the way we interact with them could change. For now, though, it's still "early days."

If your chatbot is an asshole, then so are you

Artem Rodichev, a machine-learning engineer at an AI chatbot startup, envisions a different future. He believes better service comes when the chatbot is more personalized and adaptable to an individual's needs.

To do that, the AI has to have personality. “How do you give a bot personality? It’s simple – you talk to it,” Rodichev said during a presentation at the deep-learning summit.

The chatbot has to be given a dataset of words and a large body of messages stored in texts or WhatsApp chats so it can learn to talk like the user. As it learns to respond like the user, it begins to adopt a similar personality. A bit like how Microsoft's chatbot, Tay, was tricked into becoming a Hitler-loving sex troll after miscreants found and exploited a debugging codeword to teach the software rude phrases: saying "repeat after me" followed by a neo-Nazi outburst told Tay to learn that phrase and say it back to other people in conversation.

The system uses recurrent neural networks to draw upon a bank of words stored in its internal memory and to process arbitrary sentences as inputs. The sentences in the messages are split into words, each of which is converted into a vector in a high-dimensional space – a technique known as word2vec.
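To see what that word2vec step looks like in practice, here is a minimal skip-gram trainer on a hypothetical toy message corpus – a rough sketch of the general technique, not Rodichev's actual code – using only NumPy:

```python
import numpy as np

# Toy corpus standing in for a user's message history (hypothetical data).
messages = [
    "book me a flight",
    "book me a hotel",
    "cancel my flight",
    "cancel my hotel",
]

# Tokenize and build a vocabulary.
sentences = [m.split() for m in messages]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension

# Skip-gram training pairs: (centre word, neighbouring context word).
pairs = [(s[i], s[j]) for s in sentences
         for i in range(len(s))
         for j in (i - 1, i + 1) if 0 <= j < len(s)]

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # input (word) embeddings
W_out = rng.normal(scale=0.1, size=(V, D))  # output (context) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A few epochs of plain softmax skip-gram, trained with SGD:
# each centre word learns to predict its neighbours.
for _ in range(200):
    for centre, context in pairs:
        c, o = idx[centre], idx[context]
        scores = W_out @ W_in[c]   # similarity of centre to every word
        grad = softmax(scores)
        grad[o] -= 1.0             # d(loss)/d(scores) for cross-entropy
        W_out -= 0.05 * np.outer(grad, W_in[c])
        W_in[c] -= 0.05 * (W_out.T @ grad)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words that appear in similar contexts end up with similar vectors.
print(cosine(W_in[idx["flight"]], W_in[idx["hotel"]]))
```

"flight" and "hotel" occur in identical contexts in this corpus, so their vectors drift together during training – the property that lets such a system treat them as interchangeable when matching responses.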

Various algorithms decide whether a particular word is a good or bad fit by assigning it a high or low score, and then choose how the chatbot should respond. The output text is selected by how well it correlates with the input text, which the system has learned from analyzing streams of the user's messages.
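That matching step can be sketched as a simple retrieval scorer: embed the input and each candidate reply as averaged word vectors, then pick the reply most similar to the input. The vectors below are hand-crafted toy values standing in for learned embeddings; this is an illustration of the general idea, not the system described in the talk:

```python
import numpy as np

# Toy word vectors standing in for learned word2vec embeddings
# (hypothetical values; a real system would learn these from user chats).
vectors = {
    "flight": np.array([0.9, 0.1, 0.0]),
    "plane":  np.array([0.8, 0.2, 0.0]),
    "hotel":  np.array([0.1, 0.9, 0.0]),
    "room":   np.array([0.2, 0.8, 0.0]),
    "booked": np.array([0.0, 0.1, 0.9]),
}

def embed(sentence):
    """Average the vectors of known words into one sentence vector."""
    words = [w for w in sentence.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def best_response(user_input, candidates):
    """Return the candidate reply that best correlates with the input."""
    q = embed(user_input)
    return max(candidates, key=lambda c: cosine(q, embed(c)))

replies = ["your plane is booked", "your room is booked"]
print(best_response("book me a flight", replies))  # → "your plane is booked"
```

Because "flight" and "plane" have similar vectors, the flight-related reply scores highest even though the exact word never matches – the "correlation" between input and output the talk described.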

Over time, as the user works with the chatbot, it learns to adapt. It will improve and become more reliable, Rodichev said.

“We aren’t there yet. But in the future, chatbots will be like the one seen in the movie Her,” Rodichev told The Register. ®
