
Being asked to rate fake news may help stop social media users sharing it, study finds

Could slow the spread of misinformation without the need for draconian laws

Research including a Twitter field experiment has found that social media organisations might have a third option, one that involves neither the banhammer nor a laissez-faire attitude to the fake news plague infecting their platforms.

Caught between the instinct for a less costly, hands-off approach to content monitoring and the unpalatable prospect of draconian policing of news, social media giants such as Twitter, Facebook, YouTube, and Instagram, which boast billions of users between them worldwide, have so far done little about the problem.

Only relatively recently has Facebook launched a campaign to help users spot fake news.

However, Gordon Pennycook, professor at Canada's University of Regina, and his team have shown there is more social media companies could do to stem the flow of fake news.

"Although misinformation is nothing new, the topic gained prominence in 2016 after the US Presidential Election and the UK's Brexit referendum, during which entirely fabricated stories (presented as legitimate news) received wide distribution via social media," the paper published in Nature said.

The researchers first ran survey experiments in which American participants, recruited via the crowdsourcing website Amazon Mechanical Turk, were asked to rate the accuracy of news headlines and to say whether they would share them, with political affiliation controlled for.

Although true headlines were rated as "accurate" more often than false ones, subjects were twice as likely to consider sharing false headlines that fitted their political outlook as they were to rate those same headlines as accurate, implying that on some level they were willing to share information they knew to be inaccurate.

However, most participants also said it was "extremely important" to share only accurate information on social media, leading the researchers to conclude that much misinformation may be spread unintentionally rather than maliciously.

Sharing while 'distracted'

A subsequent Twitter field experiment involved 5,379 users who had recently shared links to websites that regularly produce misleading and hyper-partisan content. They were sent an unsolicited message asking them to rate the accuracy of a single non-political headline. Researchers then compared the quality of the news sites shared in the 24 hours after receiving the message to the sites shared by participants who had not yet received the message.

The research showed that when individuals were sent a private message asking them to rate news accuracy, the accuracy and quality of the news sources they shared improved.

"These studies suggest that when deciding what to share on social media, people are often distracted from considering the accuracy of the content," the paper said.

The current design of social media platforms means users get instant social feedback for sharing snippets from the mix of serious news and emotionally engaging content they have quickly scrolled through.

It's a model which "may discourage people from reflecting on accuracy," they said.

*Shocked face*

"But this need not be the case. Our treatment translates easily into interventions that social media platforms could use to increase users' focus on accuracy. For example, platforms could periodically ask users to rate the accuracy of randomly selected headlines, thus reminding them about accuracy in a subtle way," the paper said.

A subtle nudge might also avoid the documented phenomenon of users reacting against guidance that debunks fake news. The authors cited an upcoming paper*, "Perverse Consequences of Debunking in a Twitter Field Experiment", which found that "being corrected for posting false news increases subsequent sharing of low quality, partisan, and toxic content."

Using the approach of reminding social media users about accuracy, the authors wrote, "could potentially increase the quality of news circulating online without relying on a centralized institution to certify truth and censor falsehood."

We can but hope.

Whatever the solution, the problem of fake news is more than a political irritant. YouTube has removed more than 30,000 misleading COVID-19 vaccination videos in the past five months, according to reports.

At the height of the global COVID-19 pandemic in April 2020, 82 websites spreading health misinformation attracted an estimated 460 million views on Facebook. Research by US-based global activist org Avaaz found that bogus health claims on Facebook accrued an estimated 3.8 billion views in one year.

The latest study may provide a tool to help social media giants curb the spread of misinformation on their platforms. Whether they will choose to use it or not might depend entirely on their business models. ®

Bootnote

*We don't have a link yet: the paper will be presented at this year's CHI Conference on Human Factors in Computing Systems, taking place in May 2021.
