AI giants pinky swear (again) not to help make deepfake smut

Oh look, another voluntary, non-binding agreement to do better

Some of the largest AI firms in America have given the White House a solemn pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.

Adobe, Anthropic, Cohere, Microsoft, OpenAI, and Common Crawl, the nonprofit that maintains an open repository of web crawl data, each made non-binding commitments to safeguard their products from being misused to generate abusive sexual imagery, the Biden administration said Thursday.

"Image-based sexual abuse ... including AI-generated images – has skyrocketed," the White House said, "emerging as one of the fastest growing harmful uses of AI to date."

According to the White House, the six aforementioned AI orgs all "commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse."

Two other commitments lack Common Crawl's endorsement. Common Crawl, which harvests web content and makes it available to anyone who wants it, has previously been fingered for vacuuming up undesirable material that has found its way into AI training datasets.

However, it makes sense that Common Crawl isn't listed alongside Adobe, Anthropic, Cohere, Microsoft, and OpenAI in their commitment to incorporate "feedback loops and iterative stress-testing strategies... to guard against AI models outputting image-based sexual abuse," since Common Crawl doesn't develop AI models.

The other commitment, to remove nude images from AI training datasets "when appropriate and depending on the purpose of the model," might seem like one Common Crawl could have signed, but the nonprofit doesn't collect images.

According to the nonprofit, "the [Common Crawl] corpus contains raw web page data, metadata extracts, and text extracts," so it's not clear what it would have to remove under that provision.

When asked why it didn't sign those two provisions, Common Crawl Foundation executive director Rich Skrenta told The Register his organization supports the broader goals of the initiative, but was only ever asked to sign on to the one provision.

"We weren't presented with those three options when we signed on," Skrenta told us. "I assume we were omitted from the second two because we do not do any model training or produce end-user products ourselves."

The (lack of) ties that (don't) bind

This is the second time in a little over a year that big-name players in the AI space have made voluntary concessions to the Biden administration, and the trend isn't restricted to the US.

In July 2023, Anthropic, Microsoft, OpenAI, Amazon, Google, Inflection, and Meta all met at the White House and promised to test models, share research, and watermark AI-generated content to prevent it being misused for things like non-consensual deepfake pornography.

There's no word on why some of those other companies didn't sign yesterday's pledge, which, like the 2023 agreement, is voluntary and non-binding.

It's similar to the AI safety pact several countries signed in the UK last November, which was followed in May by a deal in South Korea under which 16 companies agreed to pull the plug on any machine-learning system showing signs of being too dangerous. Both agreements are lofty and, like those out of the White House, entirely non-binding.

Deepfakes continue to proliferate, targeting both average citizens and international superstars alike. Experts, meanwhile, are more worried than ever about AI deepfakes and misinformation ahead of one of the largest global election years in modern history.

The EU has approved far more robust AI policies than the US, where AI companies seem more inclined to lobby against formal regulation, with some elected officials backing their calls for a light-touch approach.

The Register has asked the White House about any plans for enforceable AI policy. In the meantime, we'll just have to wait and see how more voluntary commitments play out. ®
