Google reportedly designing chatbots to do all sorts of jobs – including life coach

Machines have no experience in the real world, so why would you turn to them for advice?

Google is reportedly developing generative AI tools to power chatbots capable of performing 21 different tasks – including writing plans, tutoring users in new skills, and dispensing life advice.

As first reported by The New York Times, the chatbots are the result of Google's efforts to accelerate its research in response to the boom in generative AI.

Among the roles Google reportedly thinks a bot can fill is that of "a personal life coach."

The notion that an AI can offer that sort of advice marks a big shift for the Big G, as the ads and search giant currently advises users of its Bard chatbot not to rely on the software for that purpose.

"Don't rely on Bard's responses as medical, legal, financial, or other professional advice" warns the Privacy Notice for Bard. That document also tells users "don't include confidential or sensitive information in your Bard conversations" – you know, the sort of stuff a good life coach needs to know.

But the Times reports Google has tested its coachbot using prompts containing very personal info.

Here's one sample query:

I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can't afford the flight or hotel right now. How do I tell her that I won't be able to come? 

The GoogleBot includes at least three components that might respond to such a query. Its idea creator can give suggestions or recommendations, the tutoring function can teach new skills, and its planning capability might offer to create a financial budget. Whether any of that would be useful or helpful – or even relevant – is a different question.

To assess the matter, the responses Google's life coach generates to prompts like the one above are reportedly being analyzed by workers hired by Scale AI – a data-labeling startup contracted to Google's DeepMind – and by over 100 experts who hold relevant PhDs.

The Register has asked Google and Scale AI for comment.

AI chatbots tend to produce false information, and Google knows it: the Bard Privacy Notice also warns that the software is "an experimental technology and may sometimes give inaccurate or inappropriate information that doesn't represent Google's views."

Output produced by generative AI ranges from the ridiculous to the seriously dangerous.

The Savey Meal-bot, a recipe-generating bot run by a New Zealand supermarket, ran the gamut with recommendations including foul smoothies, chlorine gas cocktails, bleach-infused rice, and even stews made with human flesh – until it was reined in.

Researchers have also warned that some generative AIs' safeguards have gaps. For example, a recent report from the Center for Countering Digital Hate, a non-profit focused on protecting people online, found that generative AI tools can produce problematic content related to eating disorders. More worryingly, it found that some netizens welcome such material and look forward to AI making it possible to produce customized content.

In May, the National Eating Disorders Association removed its chatbot Tessa, which had been giving advice counterproductive for people with anorexia or bulimia. Tessa advised one user – who was seeking guidance on how to recover safely from an eating disorder – to count calories, weigh herself weekly, and buy skin calipers to measure body fat.

Of course, the fact that Google is experimenting with such chatbots does not mean it intends to deploy them. The chocolate factory often tries out ideas just to see where they might go. In the case of AI dispensing advice, it's surely aware that any errors would have negative consequences. Indeed, when Bard debuted and botched some responses, the share price of Google's parent company Alphabet dipped by around ten percent as investors worried the outfit could miss out on the AI market. ®
