It took Taylor Swift deepfake nudes to focus Uncle Sam, Microsoft on AI safety

Fakers gonna fake, fake, fake, fake, fake ... time to fake it off

Fake sexually explicit AI-generated viral images of pop royalty Taylor Swift have struck a nerve, leading fans, Microsoft's boss, and even the White House to call for immediate action to tackle deepfakes.

The X-rated images, to which The Register won't link, circulated online over the weekend and were published on Twitter, racking up at least tens of millions of views. The deepfakes have thrust the issue of non-consensual explicit AI deepfakes center stage as Swifties – many of whom flagged the images as inappropriate – were apparently disappointed to learn that there is no federal law prohibiting such content.

Our voices are our secret weapon, and our words are like power-ups in Fortnite

Concern about the production of sexually explicit deepfakes of less famous women, made without their consent, long predates this incident. But now that megastar Taylor Swift has been pulled into the quagmire, it's red alert all round.

White House press secretary Karine Jean-Pierre declared Congress "should take legislative action" to stamp out fake NSFW images. "We are alarmed by the reports of the … circulation of images that you just laid out – of false images to be more exact, and it is alarming," she told ABC News.

In the meantime, Jean-Pierre urged social media apps including X to take down the images and prevent them from spreading online.

"While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people," she added.

Twitter, or X, at one point blocked searches for Taylor Swift entirely to halt the flow of faked nudes, lifting the blockade last night. That hasn’t stopped the AI-generated deepfakes doing the rounds in chatrooms and on image boards. Microsoft CEO Satya Nadella – whose IT giant's text-to-image tool Designer may well have been used to create the bogus snaps – called the false images "alarming and terrible."

"We have to act," Nadella told NBC News, referring to guardrails that need to be put in place to prevent Designer from creating this kind of material.

"I think we all benefit when the online world is a safe world. And so I don't think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this."

Earlier this month, Joe Morelle (D-NY) and Tom Kean (R-NJ), members of the US House of Representatives, reintroduced the Preventing Deepfakes of Intimate Images Act. The bill aims to criminalize the creation and sharing of sexually explicit non-consensual AI pictures, with penalties of up to ten years in prison.

Morelle introduced the legislation accompanied by New Jersey teen Francesca Mani and her mother Dorota, who had been frustrated at the lack of support available for tackling sexually explicit deepfake images.

"Just because I'm a teenager doesn't mean my voice isn't powerful. Staying silent? Not an option. We are given voices to challenge, to speak up against the injustices we face. What happened to me and my classmates was not cool, and there's no way I'm just going to shrug and let it slide," argued Francesca.

"I'm here, standing up and shouting for change, fighting for laws so no one else has to feel as lost and powerless as I did on October 20. Our voices are our secret weapon, and our words are like power-ups in Fortnite. My mom and I are advocating to create a world where being safe isn't just a hope; it's a reality for everyone."

The bill was introduced to Congress in 2023 and referred to the House Committee on the Judiciary, although no action was taken at the time. ®
