Real-time crowdsourced fact checking not really that effective, study says

NYU boffins find the crowd is not all that wise when it comes to spotting misinformation

Social media companies have proposed enlisting their respective audiences to catch the misinformation they distribute, or are already doing so.

Facebook, now living under the assumed name Meta for its own protection, says, "We identify potential misinformation using signals, like feedback from people on Facebook, and surface the content to fact-checkers." And Facebook founder and Meta head Mark Zuckerberg suggested crowdsourced fact-checking in a 2019 video interview with Harvard Law Professor Jonathan Zittrain.

Twitter meanwhile is testing "Birdwatch," which the company describes as "a new community-driven approach to help address misleading information on Twitter."

YouTube relies on an automated content flagging system, tormented content moderators, a Trusted Flaggers program, and reports from the broader community.

Judging by the ongoing availability of misinformation on social media platforms, these methods don't work all that well.

And when boffins from New York University’s Center for Social Media and Politics (CSMaP) set out to test the so-called wisdom of the crowd as a defense against misinformation, the researchers came to the same conclusion.

In a paper titled, "Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking," researchers William Godel, Zeve Sanderson, Kevin Aslett, Jonathan Nagler, Richard Bonneau, Nathaniel Persily, and Joshua Tucker report that average Americans, and the machine learning models built from their input, don't measure up to professional fact checkers.

The CSMaP paper was published on Thursday in the inaugural issue of the Journal of Online Trust and Safety.

The wisdom of crowds?

The report authors examined how well real-time crowdsourced fact-checking works by selecting 135 popular news stories and having them evaluated by ordinary people and professional fact-checkers within 72 hours of publication.

They found that while machine learning models based on crowd input performed better than simple aggregation rules, both approaches fell short of professional fact-checkers. They also found that these automated mechanisms worked even better when the study respondents had high levels of political knowledge. This, they say, suggests "reason for caution for crowdsourced models that rely on a representative sample of the population."
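The "simple aggregation rules" the researchers compared against can be as basic as a majority vote over crowd ratings. A minimal sketch of that idea, with invented names and data purely for illustration:

```python
# Hypothetical sketch of a simple crowd aggregation rule: each respondent
# rates an article as accurate (1) or false (0), and the crowd's verdict
# is a straight majority vote. All names and numbers here are illustrative.

def crowd_verdict(ratings, threshold=0.5):
    """Return 'true' if the share of respondents rating the article
    as accurate exceeds the threshold, else 'false'."""
    share_true = sum(ratings) / len(ratings)
    return "true" if share_true > threshold else "false"

# Ten crowd ratings for one article: six of ten rated it accurate
ratings = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(crowd_verdict(ratings))  # prints "true"
```

The study's point is that even this kind of aggregation, or fancier models trained on the same crowd input, still trailed the professionals.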

"Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited – and have significant variation – in their ability to identify false news," the paper says.

Asked in an email whether this is simply a way of saying "editors" produce better results than a random set of people with no special knowledge, Joshua A. Tucker, professor of politics, an affiliated professor of Russian and Slavic studies, and an affiliated professor of data science, replied, "It’s not editors, but we do find that in terms of trying to identify the veracity of news, crowds made up of people that we would expect to be more knowledgeable about political news do a better job of correctly identifying whether the news is true or not than simply random crowds."

"This might not sound surprising, but the whole idea of the 'wisdom of the crowds' literature has tended to be that you can aggregate across people who are not particularly good at a task (like guessing the weight of a cow) and, despite lots of variation, get a very good guess on average. Here, we find evidence that more knowledgeable crowds get better answers."
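The classic cow-weight result Tucker alludes to is easy to simulate: individual guesses vary wildly, yet their mean can land close to the truth. A toy illustration with made-up numbers:

```python
# Toy simulation of the "wisdom of the crowds" averaging effect.
# The true weight and the guessing noise are invented for illustration.
import random

random.seed(0)
true_weight = 540  # kg, the hypothetical cow
# 1,000 noisy individual guesses, each off by up to +/-150 kg
guesses = [true_weight + random.uniform(-150, 150) for _ in range(1000)]
crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # the mean lands near 540 despite wide variation
```

The paper's finding is that this averaging trick transfers poorly to real-time fact-checking unless the crowd is more knowledgeable than a random sample.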

As to the policy implications of the findings, Zeve Sanderson, founding executive director of CSMaP, said, "Our goal with this project was to understand whether the wisdom of the crowds literature – which is now 100 years old and covers a diverse array of topics – extends to fact checking news stories in real time."

"We were interested in this because of the scientific questions, as well as the implications for platform policies. Our research suggests that the most normatively pleasing crowd-based system – surveying a random sample of people and using simple aggregation rules – will likely be ineffective for fact checking news stories in the period directly following publication."

While the study shows that other approaches, like more informed crowds or machine learning, can lead to better results, it's not clear how well either approach withstands a rapidly changing information environment such as the ongoing pandemic.

"The possibility of crowdsourcing fact checking is something that platforms have publicly explored, and our results give reason for caution," said Sanderson.

Making it all better

In light of the paper's caution about how misinformation detection can become less reliable during times of rapid change, The Register asked to what extent adversarial efforts to distribute misinformation make things worse.

"To a large extent," said William Godel, a doctoral candidate in NYU’s Department of Politics and the study's lead author, "that is already the approach of many low credibility news sources, which is why we designed our study to evaluate real-time articles from low credibility sources. Despite this environment, crowdsourcing clearly did identify a useful signal, albeit a somewhat weak one."

"But a significant weakness of any moderating system is deciding what to review given the plethora of content. Any adversarial approach that could successfully avoid review in the first place could bypass this system entirely."

Godel reiterated Sanderson's observation that significant changes in the information environment – events that cause a spike in the proportion of false news, for example – make misinformation detection less effective.

"This suggests that these methods could share some of the potential brittleness that has characterized other uses of machine learning," said Godel.

The paper does not explore the ethical implications of crowdsourcing, something Tucker said is addressed in a report conducted by colleagues at NYU's Stern Center for Business and Human Rights.

Tucker said crowdsourcing likely appeals to platforms because it avoids making platforms like Facebook "arbiters of truth," as Mark Zuckerberg put it.

"So if Facebook can say 'we didn’t classify this as legitimate news, our users did,' that allows Facebook to avoid the question of why it has the right to say what is true and what is not," explained Tucker.

"And for what it is worth, it is not a priori clear that it would be less expensive to crowdsource fact checking, as there would still be costs to paying crowds. So the price would be a function of how you set the system up." ®
