UK government pledges law against sexually explicit deepfakes

Not just making them, but sharing them too

The UK government has promised to make the creation and sharing of sexually explicit deepfake images a criminal offence.

It said the growth of artificially created but realistic images was alarming and caused devastating harm to victims, particularly the women and girls who are most often targeted.

The government has promised a new offence, to be included in its Crime and Policing Bill when parliamentary time allows, meaning perpetrators could be charged for both creating and sharing these images.

The bill will also create new offences for the taking of intimate images without consent, while those who install equipment for the purpose of capturing such images are also set to be covered by the law.

In a statement, victims minister Alex Davies-Jones said: "It is unacceptable that one in three women have been victims of online abuse. This demeaning and disgusting form of chauvinism must not become normalised.

"These new offences will help prevent people being victimized online. We are putting offenders on notice – they will face the full force of the law," she said.

A jail term of up to two years could apply both to criminals who take an intimate image without consent and to those who install equipment for that purpose.

In a statement Baroness Jones, technology minister, said: "With these new measures, we're sending an unequivocal message: creating or sharing these vile images is not only unacceptable but criminal. Tech companies need to step up too - platforms hosting this content will face tougher scrutiny and significant penalties."

The Ministry of Justice said the sexually explicit deepfake offences are set to apply to images of adults, as the law already covers such images of children.

It is already an offence to share or threaten to share intimate images, including deepfakes, under the Sexual Offences Act 2003, following amendments that were made by the Online Safety Act 2023.

In September last year, some of the largest AI firms in America promised to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.

Adobe, Anthropic, Cohere, Microsoft, OpenAI, and open source web data repository Common Crawl were among those making the non-binding commitments to the Biden administration.

Google's YouTube has also created privacy guidelines that allow people to request the removal of AI-generated videos that mimic them, the company said in July last year. ®
