Being asked to rate fake news may help stop social media users sharing it, study finds

Could slow the spread of misinformation without needing draconian law

Research including a Twitter field experiment has found that social media organisations may have a third option, one that involves neither the banhammer nor a laissez-faire shrug, for tackling the fake news plague infecting their platforms.

Caught between the instinct for a less costly hands-off approach to content monitoring and the unpalatable prospect of draconian policing of news, social media giants such as Twitter, Facebook, YouTube and Instagram, which boast billions of users between them worldwide, have so far done little about the problem.

Only relatively recently has Facebook launched a campaign to help users spot fake news.

However, Gordon Pennycook, a professor at Canada's University of Regina, and his team have shown there is more social media companies could do to stem the flow of fake news.

"Although misinformation is nothing new, the topic gained prominence in 2016 after the US Presidential Election and the UK's Brexit referendum, during which entirely fabricated stories (presented as legitimate news) received wide distribution via social media," the paper published in Nature said.

The researchers first ran survey experiments in which American participants, recruited via the crowdsourcing website Amazon Mechanical Turk, were asked to rate the accuracy of news headlines and to say whether they would share them, controlling for political affiliation.

Although true headlines were rated as "accurate" more often than false ones, subjects were twice as likely to consider sharing false headlines that fitted their political outlook as they were to rate those same headlines as accurate, implying that on some level they were happy to share information they knew to be inaccurate.

However, most of them also said it was "extremely important" to share only accurate information on social media, leading the researchers to conclude that much misinformation may be spread unintentionally.

Sharing while 'distracted'

A subsequent Twitter field experiment involved 5,379 users who had recently shared links to websites that regularly produce misleading and hyper-partisan content. They were sent an unsolicited message asking them to rate the accuracy of a single non-political headline. Researchers then compared the quality of the news sites shared in the 24 hours after receiving the message to the sites shared by participants who had not yet received the message.

The research showed that when individuals were sent a private message asking them to rate news accuracy, the accuracy and quality of the news sources they shared improved.

"These studies suggest that when deciding what to share on social media, people are often distracted from considering the accuracy of the content," the paper said.

The current design of social media platforms means users get instant social feedback for sharing snippets from the mix of serious news and emotionally engaging content they have quickly scrolled through.

It's a model which "may discourage people from reflecting on accuracy," they said.

*Shocked face*

"But this need not be the case. Our treatment translates easily into interventions that social media platforms could use to increase users' focus on accuracy. For example, platforms could periodically ask users to rate the accuracy of randomly selected headlines, thus reminding them about accuracy in a subtle way," the paper said.

It might also avoid users reacting against guidance debunking fake news, a backlash that has been documented before. The authors cited an upcoming paper*, "Perverse Consequences of Debunking in a Twitter Field Experiment", which shows that "being corrected for posting false news increases subsequent sharing of low quality, partisan, and toxic content."

Using the approach of reminding social media users about accuracy, the authors wrote, "could potentially increase the quality of news circulating online without relying on a centralized institution to certify truth and censor falsehood."

We can but hope.

Whatever the solution, the problem of fake news is more than a political irritant. YouTube has removed more than 30,000 misleading COVID-19 vaccination videos in the past five months, according to reports.

At the height of the global COVID-19 pandemic in April, 82 websites spreading health misinformation attracted an estimated 460 million views on Facebook. Research by US-based global activist org Avaaz found that bogus health claims on Facebook accrued an estimated 3.8 billion views in one year.

The latest study may provide a tool to help social media giants curb the spread of misinformation on their platforms. Whether they will choose to use it or not might depend entirely on their business models. ®


*We don't have a link yet: it will be presented at this year's CHI Conference on Human Factors in Computing Systems, which is to take place in May 2021.
