Meta watchdog sticks a probe into Facebook rules after fake Biden vid allowed to stay
Doctored video featured vile false slur, but it wasn't a deepfake ... so that's OK, then?
Meta's Oversight Board is probing the social media giant's policies on deepfake content after Facebook decided against taking down a faked video that falsely labelled US President Joe Biden a pedophile.
The bogus vid adapted footage of Biden placing an "I voted" sticker above his granddaughter Natalie Biden's chest during America's 2022 midterm elections.
The seven-second video, shared in May this year, was doctored and looped to make it appear as if the President touched his granddaughter inappropriately, with NSFW lyrics from the song "Simon Says" by rapper Pharoahe Monch playing in the background. A caption in the video wrongly claimed Biden is "a sick pedophile," and claimed people who voted for him in the election were "mentally unwell."
Despite a complaint from a user, Meta’s moderators did not remove the clip. The Facebook user who made the report appealed the decision to retain the video, according to the oversight board.
Meta again decided not to remove the fake video, which admittedly had been viewed fewer than 30 times as of last month. The clip was not generated using AI and passed off as authentic, nor did it manipulate Biden's speech to make him appear to say something he never uttered.
The complainant eventually raised the issue with the oversight board, an independent panel of experts recruited by Meta to review content moderation policies.
"The board selected this case to assess whether Meta's policies adequately cover altered videos that could mislead people into believing politicians have taken actions, outside of speech, that they have not," the group wrote in a statement.
"This case falls within the board's elections and civic space and automated enforcement of policies and curation of content priorities."
Facebook's manipulated media policies state that users should not post synthetic videos generated using "artificial intelligence or machine learning, including deep learning techniques (e.g. a technical deepfake), that merges, combines, replaces, and/or superimposes content onto a video, creating a video that appears authentic," nor content that "would likely mislead an average person to believe a subject of the video said words that they did not say."
The fake Biden video under consideration violated neither rule, and thus was allowed to remain online: it was not machine-made, nor did it put words in the President's mouth.
Be that as it may, Meta's stated efforts to tackle and reduce political misinformation may be undermined if such content is allowed to proliferate. There is an apparent imbalance in the rules: the above vid gets to stay up because it was clumsily edited by a human, while AI-generated deepfakes face a crackdown. If AI had made the doctored video, it would presumably have come down, but because a human made it, it doesn't have to?
As such, the board is inviting comment and ideas from the public on the following areas in light of this case:
- Research into online trends of using altered or manipulated video content to influence the perception of political figures, especially in the United States.
- The suitability of Meta’s misinformation policies, including on manipulated media, to respond to present and future challenges in this area, particularly in the context of elections.
- Meta’s human rights responsibilities when it comes to video content that has been altered to create a misleading impression of a public figure, and how they should be understood with developments in generative artificial intelligence in mind.
- Challenges to and best practices in authenticating video content at scale, including by using automation.
- Research into the efficacy of alternative responses to political disinformation or misinformation beyond content removal, such as fact-checking programs or labelling (also known as “inform treatments”). Additionally, research on avoiding bias in such responses.
Armed with that input, the panel is expected to review the policies and make suggestions to Meta – though not a lot may come from it.
"As part of its decisions, the board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days," the panel admitted. "As such, the board welcomes public comments proposing recommendations that are relevant to this case."
Experts and lawmakers are meanwhile increasingly concerned about deepfakes manipulating political discourse ahead of the upcoming 2024 US presidential election.
Last week, US Senator Amy Klobuchar (D-MN) and House Representative Yvette Clarke (D-NY) sent letters to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino seeking to clarify their content policies regarding political deepfakes.
"With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues," the letter stated, AP reported.
The Register has asked Meta for comment. ®