
US Senate approves deepfake bill to defend against manipulated media

Proposed legislation calls for research to detect synthetic shams online

On Wednesday, the US Senate approved proposed legislation to fund defenses against realistic computer-generated media known as deepfakes. The bill now awaits consideration in the US House of Representatives.

Introduced last year by US Senators Catherine Cortez Masto (D-NV) and Jerry Moran (R-KS), the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act) aims to promote research to detect and defend against realistic-looking fakery that can be used for purposes of deception, harassment, or misinformation.

That's already happening. For example, security biz Sensity last month published a report about a deepfake-making bot on the Telegram messaging platform that has taken social media images of hundreds of thousands of real women and rendered them so the subjects appear to be naked. These images, the company says, could potentially be used for public-shaming or extortion attacks.

And the Congressional Research Service claims there's evidence that foreign intelligence agents have used deepfake photos for social media accounts in attempts to recruit sources.

The issue has been a matter of concern among US lawmakers for several years, prompting hearings in 2019 and threats to hold online services accountable for failing to police deepfakes.

The bill, S.2904, directs the US National Science Foundation to support research into "manipulated or synthesized content and information authenticity," specifically content produced by AI systems known as Generative Adversarial Networks (GANs), such as deepfakes.
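For the unfamiliar, a GAN pits two neural networks against each other: a generator that synthesizes content from random noise, and a discriminator that tries to tell the fakes from real samples. The minimal PyTorch sketch below shows that basic structure; the layer sizes and MNIST-scale image dimension are illustrative assumptions, not taken from any real deepfake system.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps random noise vectors to synthetic, flattened 'images'."""
    def __init__(self, noise_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values squashed into [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores each input as real (close to 1) or generated (close to 0)."""
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Training alternates between the two: the discriminator learns to spot fakes,
# and the generator learns to produce fakes the discriminator can't spot.
gen, disc = Generator(), Discriminator()
noise = torch.randn(16, 100)   # a batch of 16 random noise vectors
fakes = gen(noise)             # synthesized samples
verdicts = disc(fakes)         # discriminator's realism score per sample
```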

It also requires the National Institute of Standards and Technology (NIST) to develop ways to measure and assess deepfakes, and to investigate public-private partnerships focused on detecting synthesized or manipulated content.

Companies like Amazon, Facebook, and Microsoft, along with academic institutions, are already conducting related research through initiatives such as the Deepfake Detection Challenge.

In January, Facebook said it had revised its policies to support the removal of "misleading manipulated media," except where deemed to be parody or satire.

The US Defense Advanced Research Projects Agency (DARPA) has two relevant programs underway: Media Forensics (MediFor) and Semantic Forensics (SemaFor). The first, according to the Congressional Research Service [PDF], aims to develop algorithms "to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated."

The second focuses on developing algorithms to "automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes."
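At its simplest, the detection side of this work is a binary classification problem: given features extracted from a piece of media, decide whether it is authentic or synthesized. The toy sketch below has nothing to do with the actual MediFor or SemaFor algorithms; it illustrates the idea using made-up feature vectors and scikit-learn's off-the-shelf logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in feature vectors (imagine frequency-domain or sensor-noise
# statistics extracted from media files); the fakes are drawn from a
# slightly shifted distribution so there is a signal to learn.
real_features = rng.normal(0.0, 1.0, size=(500, 32))
fake_features = rng.normal(0.3, 1.0, size=(500, 32))

X = np.vstack([real_features, fake_features])
y = np.array([0] * 500 + [1] * 500)  # 1 = manipulated/synthesized

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```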

There's also a related project involving Georgia Tech researchers, who are trying to develop algorithms that generate synthetic, non-sensitive stand-ins for real, sensitive data, so scientists can analyze and test GANs without running into privacy problems.
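The Georgia Tech method itself isn't detailed here, but one simple illustration of the general idea is to fit a distribution to sensitive records and sample synthetic rows that mimic their statistics without exposing any real individual. The Gaussian sketch below is purely an assumption for demonstration, not the researchers' actual technique.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are sensitive records, e.g. (age, weight) pairs.
sensitive = rng.normal(loc=[50.0, 120.0], scale=[10.0, 15.0], size=(1000, 2))

# Fit a Gaussian to the real data, then sample synthetic rows from it.
mean = sensitive.mean(axis=0)
cov = np.cov(sensitive, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# `synthetic` matches the real data's overall statistics, but no row in it
# corresponds to an actual individual, so it can be shared for GAN research.
print(synthetic.mean(axis=0), synthetic.std(axis=0))
```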

The Congressional Budget Office has estimated that S.2904, if approved by the House (H.R.4355) and signed by the President, will cost $6m in the 2020-2025 period. ®
