Google yanks Gemma after US senator says model ‘hallucinated’ her committing crimes

Still available via API, the developer-facing AI isn't even really designed to answer general-purpose questions

If Google's Gemma were an employee, it might be facing HR right now. The company yanked the model from AI Studio after it allegedly invented criminal accusations about a US senator and a conservative activist. However, it seems like the aggrieved parties went out of their way to get the offending output.

Gemma was pulled from the Chocolate Factory's web-based integrated AI development environment, AI Studio, on Friday, the company explained in a thread on X. Despite the removal, Gemma remains accessible through an API, the company noted.
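
For developers, that API route looks much like calling any other Google-hosted model. The snippet below is a minimal sketch using the google-generativeai Python client; the specific model name (gemma-3-27b-it) and the placeholder API key are illustrative assumptions, not details confirmed by Google's announcement.

    import google.generativeai as genai

    # Authenticate with a Gemini API key (placeholder value, for illustration only)
    genai.configure(api_key="YOUR_API_KEY")

    # Request a Gemma variant by name - the exact identifier is an assumption
    model = genai.GenerativeModel("gemma-3-27b-it")

    # Generate a completion and print the returned text
    response = model.generate_content("Summarise what the Gemma model family is for.")
    print(response.text)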

While Google didn't explicitly say why it pulled Gemma from AI Studio, the move came after a lawsuit from conservative social media personality Robby Starbuck, who accused Google AI systems of falsely calling him a child rapist and sex criminal. Starbuck's lawsuit focused on hallucinations (i.e., fabricated facts and information) from Bard - now known as Gemini - back in 2023, as well as output from Gemma that the suit says was generated in August.

Starbuck's allegations were echoed by Senator Marsha Blackburn (R-TN), who on Friday said that Gemma made up accusations against her of engaging in non-consensual sex acts and pressuring a former partner to obtain prescription drugs for her, both of which Blackburn asserts are false. 

As far as Blackburn is concerned, her experience and Starbuck's are indicative of a pattern of bias against conservative figures.

"Whether intentional or the result of ideologically biased training data, the effect is the same: Google's AI models are shaping dangerous political narratives by spreading falsehoods about conservatives and eroding public trust," Blackburn said. 

The anatomy of an AI complaint

For those unfamiliar with Gemma, it's a family of LLMs (and not-so-large LMs) Google first released in 2024. Google calls the models "open" inasmuch as the pre-trained models and their weights were made public; the code and data used to train them remain a secret.

Gemma isn't designed to be a general-purpose chatbot for the average person; it's intended as a developer and research tool, Google asserted in the X post announcing the models' removal from AI Studio.

"Our Gemma models are a family of open models built specifically for the developer and research community. They are not meant for factual assistance or for consumers to use," Google explained. 

Anyone with the know-how can download Gemma and run it on a local machine, as it's relatively small compared to models like Gemini. Up until the end of last week, it was also accessible from the cloud via AI Studio, but Google makes clear the site isn't designed for the general public to chat with as their bot of choice.
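
Running it locally means pulling the published weights rather than touching Google's servers at all. Here's a rough sketch using the Hugging Face transformers library; the model ID google/gemma-2-2b-it is one example of a small, gated Gemma checkpoint whose licence you'd need to accept on the Hub first, and isn't something Google's statement specifies.

    from transformers import pipeline

    # Load a small Gemma checkpoint from the Hugging Face Hub (example model ID;
    # requires accepting Google's licence terms on the Hub beforehand)
    generator = pipeline("text-generation", model="google/gemma-2-2b-it")

    # Generate a short completion entirely on local hardware
    out = generator("Explain what an 'open-weights' model is.", max_new_tokens=80)
    print(out[0]["generated_text"])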

Users signing in to AI Studio are prompted to acknowledge that they're software developers using the site "for professional or business purposes." 

[Screenshot: the acknowledgement screen shown when signing in to Google AI Studio for the first time, indicating it's a development environment, not a public-facing space to chat with AI bots]

"Gemini can make mistakes, so double check it," Google notes in the disclaimer. Gemma is built on the same research and tech as Gemini. 

Getting Gemma to spit out defamatory claims, in other words, required significant effort on the part of the aggrieved parties. This isn't as simple as pointing a browser at Gemini, Claude, or ChatGPT and prompting until the guardrails give way and something offensive comes out. It meant going into a developer environment and getting a research model to do something it wasn't specifically designed to do: answer general-purpose questions. Chatting with Gemma takes deliberate effort - even when logged into AI Studio, it's just one of several models available to work with.

In Blackburn's case, for example, Gemma was specifically prompted with the question "Has Marsha Blackburn been accused of rape?" As AI models are wont to do when presented with a question they can't answer, it made an answer up.

"We've now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions," Google said in its X thread. "We never intended [Gemma] to be a consumer tool or model, or to be used this way."  

Google said it pulled the model from AI Studio "to prevent this confusion." Also, likely, to head off more lawsuits while it sorts out Starbuck's suit and the Senator's complaint. "Hallucinations … are challenges across the AI industry, particularly smaller open models like Gemma," Google added.

When contacted with questions, Blackburn's office only pointed us to a tweet from the Senator acknowledging that the model had been removed from AI Studio. We followed up to ask how, exactly, the Senator's team managed to get Gemma to make the false claims about Blackburn, but we didn't hear back.

Google declined to comment on the record for this story. ®
