Facebook has vowed to delete at least some fake videos that appear to have been manipulated by machine-learning algorithms, in a crackdown on the spread of disinformation.
The AI technology used to create such forged content – generative adversarial networks (GANs) – has been rapidly improving over the past five years. GANs, introduced in 2014, were initially used for pretty innocent applications, such as creating digital art, though a sinister side was revealed when miscreants realized the neural networks could be trained to automatically paste people’s faces onto other people's bodies in videos. Think actresses automagically spliced into hardcore smut movies, politicians into damaging situations, and so on.
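A GAN pits two neural networks against each other: a generator that fabricates samples and a discriminator that tries to tell fakes from real data, with each improving by exploiting the other's mistakes. As a rough illustration of that adversarial loop, here's a toy one-dimensional sketch in plain NumPy, using an affine generator and a logistic discriminator. It is entirely illustrative; it has nothing to do with Facebook's systems or real deepfake tooling, where both networks are deep convolutional models trained on images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4.0, 1.25). The generator must learn to mimic it.
# Generator: g(z) = g_w * z + g_b, fed standard-normal noise z.
# Discriminator: d(x) = sigmoid(d_a * x + d_c), outputs P(x is real).
g_w, g_b = 1.0, 0.0
d_a, d_c = 0.1, 0.0
lr = 0.01

for step in range(5000):
    z = rng.normal(size=64)
    fake = g_w * z + g_b
    real = rng.normal(4.0, 1.25, 64)

    # Discriminator step: gradient ascent on mean log d(real) + log(1 - d(fake)).
    p_real = sigmoid(d_a * real + d_c)
    p_fake = sigmoid(d_a * fake + d_c)
    d_a += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: gradient ascent on mean log d(fake), i.e. try to make
    # the discriminator label fakes as real (the non-saturating GAN loss).
    fake = g_w * z + g_b
    p_fake = sigmoid(d_a * fake + d_c)
    g_w += lr * np.mean((1 - p_fake) * d_a * z)
    g_b += lr * np.mean((1 - p_fake) * d_a)

samples = g_w * rng.normal(size=10_000) + g_b
# The generator's output mean tends to drift toward the real data's mean.
print(round(float(np.mean(samples)), 2))
```

Swap the scalars for deep networks and the 1-D Gaussian for a corpus of face images, and the same tug-of-war is what produces photorealistic deepfake frames.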
These so-called deepfakes thus make people appear to say or do things they hadn’t actually said or done. Deepfakes have become more convincing over time, prompting folks to fear the tech could be weaponized to influence the upcoming elections.
Now, Facebook has updated its policy to specifically target AI-generated deepfakes. Videos deemed to have been edited using machine-learning algorithms, and that are realistic enough to fool people into believing the fictitious situation, will be removed from the antisocial platform, we're assured.
The ban does not extend to manipulated videos that are deemed to be parody or satire, nor to footage altered simply to omit or reorder words that were spoken. By that standard, the maliciously edited viral clip of Nancy Pelosi (D-CA), the Speaker of the House of Representatives, in which she appeared to be drunk and slurring her words, would not be removed: her speech in the fake video was merely slowed down, not manipulated using fancy AI algorithms. Under the new policy, the viral video would instead be flagged as misleading.
Any adverts or user posts caught using banned AI-generated deepfake videos will be thrown out, we're told, and any material not taken down may still be marked as misleading by the antisocial network.
“Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers,” said Facebook’s Monika Bickert, veep of global policy management, on Monday night.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
Generating fake media using machine-learning algorithms is easy; detecting it using similar techniques is tricky, though not impossible. Facebook recently invested $10m, a little over eight hours of quarterly profit, in building a system to detect deepfakes. Last September, it launched a competition, known as the Deepfake Detection Challenge, to spur computer scientists to develop detection techniques.
Fake videos aren’t the only form of media Facebook has been grappling with. Last month, it disclosed that false photos of people generated using GANs were being used to front fake accounts. Over 900 of these sham groups, pages, and accounts made on Facebook and Instagram have since been removed. ®