
Real-time crowdsourced fact checking not really that effective, study says

NYU boffins find the crowd is not all that wise when it comes to spotting misinformation

Social media companies have proposed enlisting their respective audiences to catch the misinformation they distribute, or are already doing so.

Facebook, now living under the assumed name Meta for its own protection, says, "We identify potential misinformation using signals, like feedback from people on Facebook, and surface the content to fact-checkers." And Facebook founder and Meta head Mark Zuckerberg suggested crowdsourced fact-checking in a 2019 video interview with Harvard Law Professor Jonathan Zittrain.

Twitter meanwhile is testing "Birdwatch," which the company describes as "a new community-driven approach to help address misleading information on Twitter."

YouTube relies on an automated content flagging system, tormented content moderators, a Trusted Flaggers program, and reports from the broader community.

Judging by the ongoing availability of misinformation on social media platforms, these methods don't work all that well.

And when boffins from New York University's Center for Social Media and Politics (CSMaP) set out to test the so-called wisdom of the crowd as a defense against misinformation, they came to the same conclusion.

In a paper titled, "Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking," researchers William Godel, Zeve Sanderson, Kevin Aslett, Jonathan Nagler, Richard Bonneau, Nathaniel Persily, and Joshua Tucker report that average Americans, and the machine learning models built from their input, don't measure up to professional fact checkers.

The CSMaP paper was published on Thursday in the inaugural issue of the Journal of Online Trust and Safety.

The wisdom of crowds?

The report authors examined how well real-time crowdsourced fact-checking works by selecting 135 popular news stories and having them evaluated by ordinary people and professional fact-checkers within 72 hours of publication.

They found that while machine learning models based on crowd input performed better than simple aggregation rules, both approaches fell short of fact-checking pros. They also found that these mechanisms worked better when the study respondents had high levels of political knowledge. This, they say, suggests "reason for caution for crowdsourced models that rely on a representative sample of the population."
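For a sense of what a "simple aggregation rule" looks like in practice, the most basic version is a majority vote over crowd ratings. The sketch below is a rough illustration of that idea only, not the paper's actual method; the function name, threshold, and sample ratings are made up for the example.

```python
# Minimal sketch of a simple crowd aggregation rule (majority vote).
# Assumes each respondent labels a story as True (accurate) or False (misleading).
# Names and the 0.5 threshold are illustrative, not taken from the CSMaP paper.

def crowd_verdict(labels: list[bool], threshold: float = 0.5) -> bool:
    """Return True if the share of 'accurate' votes exceeds the threshold."""
    if not labels:
        raise ValueError("need at least one crowd label")
    share_true = sum(labels) / len(labels)
    return share_true > threshold

# Example: ten hypothetical crowd ratings for one article
ratings = [True, True, False, True, False, True, True, False, True, False]
print(crowd_verdict(ratings))  # True -- 6 of 10 respondents rated it accurate
```

The study's point is that rules of this kind, applied to a representative sample of respondents, extract only a weak signal compared with professional fact checkers, and that models trained on crowd input do better but still fall short.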

"Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited – and have significant variation – in their ability to identify false news," the paper says.

Asked in an email whether this is simply a way of saying "editors" produce better results than a random set of people with no special knowledge, Joshua A. Tucker, professor of politics, an affiliated professor of Russian and Slavic studies, and an affiliated professor of data science, replied, "It’s not editors, but we do find that in terms of trying to identify the veracity of news, crowds made up of people that we would expect to be more knowledgeable about political news do a better job of correctly identifying whether the news is true or not than simply random crowds."

"This might not sound surprising, but the whole idea of the 'wisdom of the crowds' literature has tended to be that you can aggregate across people who are not particularly good at a task (like guessing the weight of a cow) and, despite lots of variation, get a very good guess on average. Here, we find evidence that more knowledgeable crowds get better answers."

As to the policy implications of the findings, Zeve Sanderson, founding executive director of CSMaP, said, "Our goal with this project was to understand whether the wisdom of the crowds literature – which is now 100 years old and covers a diverse array of topics – extends to fact checking news stories in real time."

"We were interested in this because of the scientific questions, as well as the implications for platform policies. Our research suggests that the most normatively pleasing crowd-based system – surveying a random sample of people and using simple aggregation rules – will likely be ineffective for fact checking news stories in the period directly following publication."

While the study shows that other approaches, like more informed crowds or machine learning, can lead to better results, it's not clear how well either approach withstands a rapidly changing information environment such as the ongoing pandemic.

"The possibility of crowdsourcing fact checking is something that platforms have publicly explored, and our results give reason for caution," said Sanderson.

Making it all better

In light of the paper's caution about how misinformation detection can become less reliable during times of rapid change, The Register asked to what extent adversarial efforts to distribute misinformation make things worse.

"To a large extent," said William Godel, a doctoral candidate in NYU’s Department of Politics and the study's lead author, "that is already the approach of many low credibility news sources, which is why we designed our study to evaluate real-time articles from low credibility sources. Despite this environment, crowdsourcing clearly did identify a useful signal, albeit a somewhat weak one."

"But a significant weakness of any moderating system is deciding what to review given the plethora of content. Any adversarial approach that could successfully avoid review in the first place could bypass this system entirely."

Godel reiterated Sanderson's observation that significant changes in the information environment – events that cause a spike in the proportion of false news, for example – make misinformation detection less effective.

"This suggests that these methods could share some of the potential brittleness that has characterized other uses of machine learning," said Godel.

The paper does not explore the ethical implications of crowdsourcing, something Tucker said is addressed in a report by colleagues at NYU's Stern Center for Business and Human Rights.

Tucker said crowdsourcing likely appeals to platforms because it avoids making platforms like Facebook "arbiters of truth," as Mark Zuckerberg put it.

"So if Facebook can say 'we didn’t classify this as legitimate news, our users did,' that allows Facebook to avoid the question of why it has the right to say what is true and what is not," explained Tucker.

"And for what it is worth, it is not a priori clear that it would be less expensive to crowdsource fact checking, as there would still be costs to paying crowds. So the price would be a function of how you set the system up." ®
