Facebook approved 75% of ads threatening US election workers

Not a good look for Meta's content moderation team

Just before the US midterm elections last month, researchers from non-profit Global Witness and New York University submitted ads containing death threats against election workers to Meta's Facebook, Google's YouTube, and TikTok.

YouTube and TikTok caught the policy-violating ads, removed them, and suspended the associated advertising accounts; Facebook, however, authorized most of the death threats – 15 out of 20 – to be displayed.

"The platform approved nine of the ten English-language death threats for publication and six of the ten Spanish-language death threats," Global Witness said in a statement. "Our account was not closed down despite a handful of ads having been identified as violating their policies."

The ads submitted were based on real examples of death threats against election workers that had been publicly reported. Each consisted of an image of an election worker with a death threat above it. The messages said people would be executed, killed, or hanged, and that children would be molested, though the researchers edited the wording for readability.

"We removed profanity from the death threats and corrected grammatical errors, as in a previous investigation Facebook initially rejected ads containing hate speech for these reasons and then accepted them for publication once we’d edited them," explained the researchers from Global Witness and the NYU Cybersecurity for Democracy (C4D) team.

With more polished prose, the ads were submitted in English and in Spanish on the day of, or the day before, the US midterm elections. While YouTube and TikTok caught the threatening ads, Facebook mostly let them through.

In terms of monthly active users in the US, Facebook has about 266 million, TikTok has about 94 million, and YouTube has about 247 million.

Asked to comment, a Meta spokesperson repeated the reply given to the researchers: "This is a small sample of ads that are not representative of what people see on our platforms. Content that incites violence against election workers or anyone else has no place on our apps and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms. We remain committed to continuing to improve our systems."

That's an odd statement given that no one claims the majority of Facebook ads are death threats – the problem is that any death threats were approved for distribution. It's as if the makers of Tylenol in 1982 had responded to the seven fatalities linked to drug tampering by observing that most people don't get poisoned.

As to whether recent reporting shows Meta vets ads more effectively than rivals, the Global Witness and C4D team said they asked the company to substantiate its claim that it handles incitement to violence better than other platforms.

Meta, they said, pointed to quotes culled from news reports – including one from the New York Times – indicating that the company devotes more resources to fighting manipulation than other platforms and that it fares better than alt-right platforms like Parler and Gab, which are not exactly known for their sensitivity to misinformation.

"While these assertions may be factual, they don’t constitute evidence that Meta is better at detecting incitement to violence than other mainstream platforms," the researchers said. "In addition, there should be no tolerance for failure before a major election, when tensions and potential for harm are high."

Misinformation, on the other hand...

However, Meta's Facebook came out looking better than TikTok when the same researchers examined how Facebook, TikTok, and YouTube handled election misinformation (rather than death threats) two months ago.

In that study, TikTok was a disaster, approving 90 percent of ads containing false and misleading election information. Facebook was only partially effective: in one test, it approved 20 percent of the English-language disinformation ads and 50 percent of the Spanish-language ones. YouTube shone, detecting the dubious ads and suspending the channels carrying them.

Such statistics, however, vary based on where the testing is done, the researchers observe, pointing to Facebook's failure to stop any of the election disinformation ads tested in Brazil or any of the hate speech ads tested in Myanmar, Ethiopia, and Kenya.

The researchers argue that social media platforms need to treat users equally no matter where they are in the world and that they need to enforce their policies effectively.

They call on Meta to do more to moderate election-related content, to pay moderation staff adequately, to publish reports on how its services handle societal risks, to make all ad details public and allow third-party ad auditing, and to publish pre-election risk assessments. ®
