'AI is not the cause, it’s an accelerant. The pace of change is challenging': Experts give Congress the straight dope on deepfakes
People will share anything scandalous, duh – El Reg might well be Exhibit A
Analysis The US House of Reps' Intelligence Committee on Thursday held its first hearing into computer-fabricated videos dubbed deepfakes, quizzing experts from across the worlds of AI, social policy, and law.
These deepfake videos are typically generated by deep-learning software to fool viewers into thinking someone said something they really did not. It's exactly the kind of content that could undermine democracy ahead of America's 2020 presidential election.
In January 2018, panic over deepfakes burst into public view overnight, when pervy Redditors started swapping raunchy clips online: porno flicks in which the faces of adult actresses had been replaced with those of popular Hollywood stars, the alterations performed by AI algorithms.
If that wasn't bad enough, things got really weird when people began asking for tips on how they could use neural networks to paste their ex-girlfriends, colleagues, and crushes into X-rated movies.
Reddit eventually shut down the /r/deepfakes forum on which the doctored vids were first shared, but the genie was out of the bottle: source code, how-to guides, and applications sprang up, allowing anyone with enough training data and time to produce their own so-called deepfakes.
More vids were passed around the internet, and as the technology improved, so did the quality of the output. Sometimes they’re artistic, sometimes they’re funny. But no one was really laughing when a doctored video of House Speaker Nancy Pelosi (D-CA), in which she appeared and sounded drunk, slurring her words during a speech, went viral on Facebook just recently. Even though it wasn't AI-generated, it demonstrated the power of digitally altered footage.
Indeed, Adam Schiff (D-CA), chairman of the intelligence committee, opened the hearing by arguing the Pelosi clip was a stark reminder of how deepfakes could impact politics. People watching falsified videos of political leaders could be duped into believing lies, swaying their judgement and the way they vote. The assault on democracy is even more threatening when you consider that foreign adversaries could make and plant deepfakes to sabotage a country’s national interests.
It should be stressed that the Pelosi video isn’t really a deepfake: no machine-learning algorithms were used to produce the clip, leading people to call it a “cheapfake” instead. But AI will only make these forms of subversion more convincing in the future.
Jack Clark, policy director at OpenAI, a San Francisco-based artificial general intelligence research lab, told the committee of lawmakers that improvements to and the availability of software and hardware are the main driving forces behind the latest explosion of fake digital content.
The latest AI algorithms, at least those developed by scientists and engineers, are often open source, and powerful CPUs and GPUs can be rented on cloud platforms, making it easy for people to create deepfakes without much deep-learning expertise. Now, deepfakes can “synthesise people’s voices, impersonate people in videos, write text to sound like someone else online to make people say things they haven’t said, and do things they haven’t actually done,” Clark, a former Register journalist, said.
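For a sense of how little code sits at the heart of this, here is a minimal sketch, in PyTorch, of the shared-encoder, dual-decoder autoencoder idea behind the original face-swap tools. The layer sizes, class names, and training details are illustrative assumptions, not the internals of any particular deepfake application:

    import torch
    import torch.nn as nn

    class FaceSwapper(nn.Module):
        def __init__(self):
            super().__init__()
            # One encoder shared between both identities learns a common
            # representation of pose, lighting, and expression.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, 256),  # assumes 64x64 aligned face crops
            )
            # One decoder per identity learns to paint that person's face.
            self.decoders = nn.ModuleDict({name: self._decoder() for name in ("a", "b")})

        @staticmethod
        def _decoder():
            return nn.Sequential(
                nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x, identity):
            return self.decoders[identity](self.encoder(x))

    model = FaceSwapper()
    faces_a = torch.rand(8, 3, 64, 64)  # stand-in for face crops of person A
    # Training: reconstruct each person through their own decoder.
    loss = nn.functional.mse_loss(model(faces_a, "a"), faces_a)
    # The swap: push A's face through B's decoder at inference time.
    swapped = model(faces_a, "b")

Train each decoder to reconstruct its own subject, and routing one person's face through the other's decoder performs the swap; the freely available tools essentially wrap this idea in face detection, alignment, and blending.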
Politicians aren’t the only victims of deepfake attacks. Danielle Citron, a professor of law at the University of Maryland, told the hearing about India-based journalist Rana Ayyub, who was targeted: far-right trolls had pasted her face onto a pornographic video and passed it around social media and messaging platforms, including WhatsApp. When her personal information was later posted online, she was bombarded with rape threats. Eventually, she retreated from the internet for a few months out of fear for her own safety.
The power of deepfakes lies in how easily they can spread and in their shocking nature. “Humans are more likely to be tricked by what we hear and see, and we’re more likely to share what we believe. The more salacious something is, the more we pass it on. Provocative deepfakes will be shared,” said Citron.
Deepfakes spread like cancer
Antisocial networks including Twitter, Facebook, YouTube, and Instagram only serve to amplify the noise. Users can be easily manipulated into liking, commenting on, and sharing posts without thinking about the potential harm they can cause. The constant flood of information makes it difficult to rein in the viral nature of social media.
David Doermann, a professor working at the University of Buffalo’s Artificial Intelligence Institute, summed it up succinctly: “A lie can travel halfway around the world before the truth can get its shoes on,” he told the committee. Deepfakes can easily gain traction because they’re so easily shared, and there’s no reason why the spread has to be quite so instantaneous, he added.
He believed the problem has to be tackled by the social media platforms and by individuals themselves: people have to be given the right tools to flag up and report malicious content, and the tech giants making all this happen need to be able to detect and automatically filter out deepfakes. “Even if we don’t take down videos, we need to provide warning labels. We need to continue to put pressure on social media platforms to realise the way their platforms are being used,” he said.
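A flagging pipeline along Doermann's lines doesn't have to be exotic. Below is a toy sketch, again in PyTorch, of the frame-level classification approach many published deepfake detectors take; the model choice, threshold, and flag_video helper are hypothetical illustrations, not any platform's actual system:

    import torch
    from torchvision.models import resnet18

    # Fine-tune an off-the-shelf image classifier to emit one
    # real-vs-fake logit per video frame.
    detector = resnet18(num_classes=1)

    def flag_video(frames: torch.Tensor, threshold: float = 0.5) -> bool:
        """frames: an (N, 3, 224, 224) batch sampled from one clip."""
        detector.eval()
        with torch.no_grad():
            scores = torch.sigmoid(detector(frames)).squeeze(1)
        # Average the per-frame scores; flag the clip for human review
        # rather than automatic removal.
        return scores.mean().item() > threshold

    frames = torch.rand(16, 3, 224, 224)  # stand-in for decoded video frames
    if flag_video(frames):
        print("route to fact-checkers and attach a warning label")

The hard part, as Doermann notes, is the arms race: each detector learns from yesterday's fakes, while tomorrow's generators are tuned to beat the detectors.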
Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute, said that moderators should keep a watchful eye on what’s going viral. “They should look at it, pass it to fact checkers. [If it’s a deepfake], they should downgrade it and not promote it on newsfeeds.”
The Pelosi cheapfake at least revealed how differently social media platforms react to such material: Facebook decided to keep the video up, while YouTube opted to remove it. Citron believed that taking it down was the right thing to do: “Platforms should have a default law since we can’t automatically filter and block deepfakes yet.”
But it’s difficult to hold Facebook, Twitter, YouTube, or Instagram legally accountable for content distributed via their systems, since they’re protected under Section 230 of America's Communications Decency Act, which states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Citron said there was no “silver bullet” for fighting deepfakes: it requires a combination of changing the law and changing civilian behaviour.
AI is not the problem
At the moment, it’s still relatively easy for people to spot deepfakes. The quality of the images and videos isn’t quite perfect yet, though some can be pretty damn convincing. The constant battle of having to debunk false content created by adversaries is like a game of cat and mouse, Doermann said. “The problem won’t go away.”
Fake news and lies have been kicking around forever, of course, so there’s nothing that novel about deepfakes. What’s particularly alarming this time around is the sheer scale at which they can spread. “AI is not the cause, it’s just an accelerant. And the pace [at which it’s evolving] is challenging,” Clark concluded.
You can replay the hearing in the video player below... ®