Meta's AI internet chatbot demo quickly starts spewing fake news and racist remarks

Plus: How Google is using language models to improve search, and watch Heinz's AI ketchup advert

In brief Another day, another rogue AI chatbot on the internet.

Last week Meta unleashed Blenderbot 3, a chatty language model, on the web as an experiment and it went as well as you would expect.

BlenderBot 3 was quick to assert that Donald Trump is still US President and will remain so beyond 2024, and to spew antisemitic views when asked controversial questions, as Business Insider showed. In other words, BlenderBot 3 is prone to spreading fake news and parroting racial stereotypes, like all language models trained on text scraped from the internet.

Meta warned netizens its chatbot could make "untrue or offensive statements," and is keeping the live demo online to collect more data for its experiments. People are encouraged to like or dislike BlenderBot 3's replies and to notify researchers if they think a particular message is inappropriate, nonsensical, or rude. The goal is to use this feedback to develop a safer, less toxic, and more effective chatbot in the future.

Google Search snippets to stop spreading fake news

The search giant has rolled out an AI model to help make the text boxes that sometimes pop up when users type questions into Google Search more accurate.

These descriptions, known as featured snippets, can be helpful if people are looking for specific facts. For example, typing "how many planets does the Solar System have?" will throw up a featured snippet stating "eight planets". Netizens don't have to click through random webpages and read them to get the answer; featured snippets surface it automatically.

But Google's answers aren't always accurate: they have at times given a specific date for a fictitious event, such as the assassination of Abraham Lincoln by cartoon dog Snoopy, according to The Verge. Google said its system, the Multitask Unified Model (MUM), should cut the featured snippets triggered by questions based on false premises by 40 per cent; often it will simply not show any text description at all.

"By using our latest AI model, our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact," it explained in a blog post.

"Our systems can check snippet callouts (the word or words called out above the featured snippet in a larger font) against other high-quality sources on the web, to see if there's a general consensus for that callout, even if sources use different words or concepts to describe the same thing."

OpenAI's DALL-E 2 helped make a Heinz Ketchup advert

Heinz, the US food giant, teamed up with a creative agency to create an advert promoting its best-known product, ketchup, using AI images generated by OpenAI's DALL-E 2 model. The advert is the latest installment in Heinz's "Draw Ketchup" campaign, but instead of turning to humans for their sketches, Rethink, a Canadian advertising agency, consulted machines.

"So, like many of our briefs, the task was to demonstrate Heinz's iconic role in today's pop culture," Rethink's executive creative director, Mike Dubrick, told The Drum this week. "Pitching the idea to the brand was next. After the brief, we rarely wait until the formal presentation when we share something we think is great."

The end result is a clever advert with a clear and simple message: given all manner of text prompts containing the word "ketchup", DALL-E 2 generates something that looks unmistakably like a Heinz bottle. In other words, cue the company's slogan: "it has to be Heinz". You can watch the ad below.

Youtube Video
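For readers curious how such prompts are sent to the model, here is a minimal, hypothetical sketch using OpenAI's Python client. The prompts, model parameters, and API access are assumptions for illustration only; this is not how Rethink produced the advert.

```python
# Illustrative sketch only: sending "ketchup" prompts to DALL-E 2 via
# OpenAI's Python client (openai>=1.0 and an OPENAI_API_KEY env var assumed).
from openai import OpenAI

client = OpenAI()

# A few hypothetical prompts in the spirit of the campaign.
prompts = [
    "ketchup",
    "ketchup in outer space",
    "renaissance painting of ketchup",
]

for prompt in prompts:
    response = client.images.generate(
        model="dall-e-2",   # the text-to-image model discussed in the article
        prompt=prompt,
        n=1,
        size="512x512",
    )
    # Each result comes back as a hosted image URL.
    print(prompt, "->", response.data[0].url)
```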

DALL-E 2 has also recently helped an artist craft a magazine cover for Cosmopolitan; it's another example of how these text-to-image tools can be used commercially in creative industries. ®
