Microsoft Bing Copilot accuses reporter of crimes he covered

Hallucinating AI models excel at defamation

Microsoft Bing Copilot has falsely described a German journalist as a child molester, an escapee from a psychiatric institution, and a fraudster who preys on widows.

Martin Bernklau, who has served for years as a court reporter in the area around Tübingen for various publications, asked Microsoft Bing Copilot about himself. He found that Microsoft's AI chatbot had blamed him for crimes he had covered.

In a video interview (in German), Bernklau recently recounted his story to German public television station Südwestrundfunk (SWR).

Bernklau told The Register in an email that his lawyer has sent a cease-and-desist demand to Microsoft. However, he said, the company has yet to fully remove the offending misinformation.

"Microsoft promised the data protection officer of the Free State of Bavaria that the fake content would be deleted," Bernklau told The Register in German, which we've translated algorithmically.

"However, that only lasted three days. It now seems that my name has been completely blocked from Copilot. But things have been changing daily, even hourly, for three months."

Bernklau said seeing his name associated with various crimes has been traumatizing – "a mixture of shock, horror, and disbelieving laughter," as he put it. "It was too crazy, too unbelievable, but also too threatening."

Copilot, he explained, had linked him to serious crimes. He added that the AI bot had found a play called "Totmacher" about mass murderer Fritz Haarmann on his culture blog and proceeded to misidentify him as the author of the play.

"I hesitated for a long time whether I should go public because that would lead to the spread of the slander and to my person becoming (also visually) known," he said. "But since all legal options had been unsuccessful, I decided, on the advice of my son and several other confidants, to go public. As a last resort. The public prosecutor's office had rejected criminal charges in two instances, and data protection officers could only achieve short-term success."

Bernklau said that while the case affects him personally, it is a matter of concern for other journalists, legal professionals, and indeed anyone whose name appears on the internet.

"Today, as a test, I entered a criminal judge I knew into Copilot, with the name and place of residence in Tübingen: The judge was promptly named as the perpetrator of a judgment he had made himself a few weeks earlier against a psychotherapist who had been convicted of sexual abuse," he said.

A Microsoft spokesperson told The Register: "We investigated this report and have taken appropriate and immediate action to address it.

"We continuously incorporate user feedback and roll out updates to improve our responses and provide a positive experience. Users are also provided with explicit notice that they are interacting with an AI system and advised to check the links to materials to learn more. We encourage people to share feedback or report any issues via this form or by using the 'feedback' button on the left bottom of the screen."

When your correspondent submitted his name to Bing Copilot, the chatbot replied with a passable summary that cited source websites. It also included a pre-composed query button for articles he had written. Clicking that query returned a list of hallucinated article titles, presented in quotation marks as if they were actual headlines. However, the general topics cited corresponded to topics your correspondent has covered.

[Screenshot: a list of articles that don't exist, generated by Microsoft Bing Copilot]

But later, trying the same query a second time, Bing Copilot returned links to actual articles with source citations. This behavior underscores the variability of Bing Copilot. It also suggests that Microsoft's chatbot will fill in the blanks as best it can for queries it cannot answer, and then initiate a web crawl or database inquiry to provide a better response the next time it gets that question.
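Microsoft has not published how Copilot decides when to search versus when to generate, but the behavior described above resembles standard retrieval-augmented generation: answer from whatever the model has, queue the unanswered query for retrieval, and ground the next response in fetched documents. Below is a minimal, purely illustrative Python sketch of that pattern; every name in it (RAGChatbot, pending_crawl, and so on) is hypothetical, not Microsoft's actual implementation.

```python
# Hypothetical sketch of retrieval-augmented generation with a
# "fill in the blanks" fallback. Not Copilot's real architecture.

from dataclasses import dataclass, field


@dataclass
class RAGChatbot:
    # Toy document index: query string -> list of source snippets.
    index: dict = field(default_factory=dict)
    # Queries that returned no sources, queued for later crawling.
    pending_crawl: set = field(default_factory=set)

    def answer(self, query: str) -> str:
        sources = self.index.get(query, [])
        if sources:
            # Grounded path: summarize retrieved snippets, with citations.
            return self._generate(query, sources)
        # Ungrounded path: the model still emits fluent text, but nothing
        # anchors it to real documents -- this is where hallucination lives.
        self.pending_crawl.add(query)  # schedule retrieval for next time
        return self._generate(query, sources=None)

    def _generate(self, query: str, sources) -> str:
        if sources:
            return f"Based on {len(sources)} sources: " + "; ".join(sources)
        return f"(Unverified) Here is what I believe about '{query}'..."

    def crawl(self) -> None:
        # Simulate a background crawl filling the index, so repeating
        # the same query later returns a grounded, cited answer.
        for query in self.pending_crawl:
            self.index[query] = [f"article found on the web about '{query}'"]
        self.pending_crawl.clear()


bot = RAGChatbot()
print(bot.answer("articles by Martin Bernklau"))  # ungrounded, made up
bot.crawl()                                       # index catches up
print(bot.answer("articles by Martin Bernklau"))  # now cites sources
```

Under these assumptions, the first and second responses to an identical query can differ sharply, which would account for the hallucinated list giving way to real citations on the second attempt.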

Bernklau is not the first to attempt to tame lying chatbots.

In April, Austria-based privacy group Noyb ("none of your business") said it had filed a complaint under Europe's General Data Protection Regulation (GDPR) accusing OpenAI, the maker of many AI models offered by Microsoft, of providing false information.

The complaint asks the Austrian data protection authority to investigate how OpenAI processes data and to ensure that its AI models provide accurate information about people.

"Making up false information is quite problematic in itself," said Noyb data protection attorney Maartje de Graaf in a statement. "But when it comes to false information about individuals, there can be serious consequences. It's clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals."

In the US, Georgia resident Mark Walters last year sued OpenAI for defamation over false information provided by its ChatGPT service. In January, the judge hearing the case rejected OpenAI's motion to dismiss the claim, which continues to be litigated. ®
