US lawmakers have warned they may revisit American tech corporations' blanket legal protections – specifically, the ones that shield internet giants from the fallout of user-posted content – in order to tackle the rise of deepfakes.
A House Intelligence Committee hearing on Thursday dug into the issue of digitally altered footage that is so convincing, people have a hard time realizing it is fabricated. These maliciously manipulated videos tend to be known as deepfakes because they typically use deep-learning software.
A panel of legal and artificial intelligence experts told the House committee that the main outlets for such videos – antisocial media networks like Facebook, Instagram, and YouTube – were legally exempt from any liability regarding these damaging fakes thanks to Section 230 of the Communications Decency Act.
One of those experts, Danielle Citron, professor at University of Maryland Francis King Carey School of Law, argued that Section 230 should be amended to force companies like Facebook to adopt "reasonable content moderation" in order to retain that legal protection. And at least one lawmaker – committee chairman Adam Schiff (D-CA) – said he was open to the idea.
Section 230 shields websites from legal action arising from content posted by their users, provided certain rules and caveats are observed. It has long been held sacrosanct by tech companies, which have lobbied fiercely against any changes to it, fearing they would open the floodgates to lawsuits. But while the blanket legal protection it provides has been enormously helpful in the rise of online platforms like Twitter, Facebook, and eBay, as well as information sites like Wikipedia, it has also allowed tech giants to sidestep issues of serious concern.
The issue of deepfakes has blown up into a political argument recently after a doctored video of Speaker of the House Nancy Pelosi was posted to Facebook, appearing to show her slurring her words. The video was promoted by President Donald Trump. That led to calls for Facebook to remove the clip – something the tech giant refused to do.
In response, an advertising agency produced a deepfake of Facebook CEO Mark Zuckerberg on Facebook-owned Instagram saying a series of disturbing things about what the company can do with all the data it has – effectively daring Facebook to intervene.
But the issue of deepfakes is more significant than cheap shots and publicity seeking, the Congressional hearing heard this morning: it could have severe and dangerous repercussions across a whole range of sectors, including politics and business.
The hearing considered the idea of a fake video of a CEO saying things designed to disrupt a share price – such as during an IPO launch. Or bogus videos used to defame presidential candidates. Such efforts could undermine the public's faith in democratic institutions and in the media, the panel of experts warned.
And, inserting himself into the maelstrom as only he is able to do, Donald Trump said this week that he would welcome foreign intervention, and thus perhaps a foreign-made deepfake, in the next US presidential race. He told ABC News: "I think you might want to listen, there isn’t anything wrong with listening, if somebody called from a country, Norway, [and said] 'we have information on your opponent' oh, I think I’d want to hear it."
He also said he might not report any such foreign intervention to the FBI: "It's not an interference, they have information – I think I'd take it. If I thought there was something wrong, I'd go maybe to the FBI… but when somebody comes up with 'oppo research', right, they come up with oppo research, 'Oh, let’s call the FBI.'"
Those comments have increased the political heat, especially given that platforms like Facebook have decided they will not take down deepfakes. Experts warn that Trump's comments and Facebook's stance are a virtual invitation for foreign governments like Russia to produce faked footage and meddle in elections – complete with a clear path for doing so.
Last year, Schiff and other members of Congress said that there needed to be some kind of federal action on deepfakes – but little seems to have happened. With the 2020 White House race coming up, and given the extensively documented efforts to sway the previous elections, lawmakers are trying to find ways to limit foreign interference.
As such, the blanket protections of Section 230 are likely to be a target. If the law is amended to require organizations (or dare we say, publishers) like Facebook to perform "reasonable content moderation" in order to keep their immunity then, in lawmakers' eyes, the impact of faked videos can be limited.
In the tech giants' eyes, it could open them up to enormous liability, and require huge resources to make sure their billions of users were not posting and reposting infringing content.
Section 230 has been opened up once: when the operators of Backpage.com refused to remove sex-trafficking ads and hid behind liability protections. In response, after months of trying and failing to force changes, lawmakers wrote a new law to remove that protection.
At the time, the internet titans backed away from using their full lobbying might, judging that fighting the change would win little or no public support – and, more importantly, hoping to keep lawmakers onside and head off broader calls for regulation of tech giants.
But the momentum to force outfits like Google, Facebook, Amazon et al to be more accountable is seemingly unstoppable at this point. Facebook's own-goal in refusing to take down the doctored Pelosi video will only increase calls for Section 230 to be changed to make online platforms act in the wider public interest.
It is going to be very difficult to legislate for specific things like AI-engineered videos, so lawmakers may instead push for broader wording along the lines of "reasonable content moderation" to avoid having to constantly write new laws.
So far there has been no clear public response from Silicon Valley to the idea of obliging them to carry out content moderation under Section 230. But then it's not as if we don't know how they feel. They hate the idea. ®