UK data watchdog warns Snap over My AI chatbot privacy issues

Plus: 4channers are making troll memes with Bing AI, and more

AI in brief Snap is in hot water with the UK's Information Commissioner's Office (ICO) over privacy risks in My AI, its chatbot aimed at teenagers.

The regulator issued a preliminary enforcement notice to Snap, outlining the ways in which data generated by its young users doesn't appear to be adequately protected, and has given the company a chance to respond.

"The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching 'My AI'," Information Commissioner John Edwards said in a statement.

"We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights."

The ICO will decide whether to issue a final enforcement notice, which could force Snap to take down My AI in the UK and stop harvesting any data from its Snapchat app users. In a statement to TechCrunch, a spokesperson representing the company said it was "closely reviewing" the notice.

"Like the ICO we are committed to protecting the privacy of our users. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they're comfortable with our risk assessment procedures," it said.

4channers use Bing AI to create and spread troll propaganda online

Netizens on controversial internet forum 4chan are encouraging one another to create and post offensive AI memes on the internet.

4chan users have been sharing guides on how to generate such images, advising others to create "funny" and "provocative" content, 404 Media first reported. The easiest method they suggested was using Bing's new DALL-E 3 feature. "Most people are using DALL-E 3 at the link below," read one post, which described the tool as a "QUICK METHOD."

Unsurprisingly, internet trolls have produced all sorts of ridiculous and offensive images, depicting what looks like a Jewish man flying into the Twin Towers, black male immigrants chasing a white woman with a knife, and more.

Microsoft rolled out DALL-E 3 this week, and the misuse by 4chan users shows the tool's content moderation isn't as effective as Microsoft would like it to be. OpenAI pointed the finger at Microsoft, telling investigative news outlet Bellingcat in a statement: "Microsoft implements their own safety mitigations for DALL-E 3."

A Microsoft spokesperson said: "We have large teams working on the development of tools, techniques, and safety systems that are aligned with our responsible AI principles. As with any new technology, some are trying to use it in ways that were not intended, which is why we are implementing a range of guardrails and filters to make Bing Image Creator a positive and helpful experience for users."

Meta's naughty AI-generated stickers

Meta's new feature allowing Facebook users to create their own generative AI stickers has raised eyebrows this week.

Some early testers took joy in posting their inappropriate, bizarre, or lewd AI creations online, including cartoon characters with breasts, Canadian Prime Minister Justin Trudeau's butt, male genitalia, and more, The Verge reported.

The stickers are created by combining the capabilities of Meta's Llama 2 large language model and its image generation system Emu. Users can type in a text-based description of the image they want to generate. Meta believes this will be fun for users, who can then share their custom stickers with their friends or post them on social media.

It's not really surprising that the new feature isn't perfect. Content filters aimed at preventing netizens from creating potentially offensive images with AI aren't foolproof; it's easy to get around them by being clever with your prompts.

Thankfully, the AI sticker tool coming to Facebook, Instagram, and WhatsApp isn't generally available yet. Meta has time to iron out some of the kinks in its product before it's fully unleashed on the internet.

Google's virtual Bard assistant is coming to smartphones

Google is rolling out a virtual AI assistant, powered by its Bard chatbot, as an app for iOS and Android devices.

"Over the last seven years, Google Assistant has helped hundreds of millions of people get things done in a natural, conversational way – setting alarms, asking for weather updates or making quick calls – all with a simple 'Hey Google'," it said in a blog post.

"Now, generative AI is creating new opportunities to build a more intuitive, intelligent, personalized digital assistant. One that extends beyond voice, understands and adapts to you and handles personal tasks in new ways."

Named Assistant with Bard, the app combines Bard's generative AI abilities with Google Assistant's ability to carry out actions. Together, they can handle more than simple tasks like making phone calls or playing a specific song. Google envisions it being used to help people plan trips, search for emails, create grocery lists, or send texts.

Users will be able to interact with Assistant with Bard in different ways, including voice, text, and images. It will also integrate with other Google apps, like Gmail and Docs, so it can perform tasks within those domains, such as writing emails or drafting documents. Google said Assistant with Bard is still in its experimental phase and being tested right now, but expects to roll it out over the next few months. ®
