Tobacco giants don't get to decide who does research on smoking. Why does Facebook get to dictate studies?

Boffins tell US lawmakers social media titans cannot be trusted to police themselves

On Tuesday, lawmakers from the US House of Representatives heard from three academics who argued that social media companies cannot be trusted to police themselves.

That might seem like a foregone conclusion given Facebook's serial involvement in controversies over the past several years, its longstanding allergy to oversight, and its unrepentant decision this week to halt the development of "Instagram Kids" only after being pilloried by press reports that the company's own research acknowledged Instagram's mental toll on teens, particularly teen girls.

And that's what it was. The title of the hearing was its own spoiler: The Disinformation Black Box: Researching Social Media Data. Despite Facebook's public relations pushback over the weekend against its accusers – it challenged the claim that Instagram is 'toxic' to teens – the legislators involved arrived convinced that social media distributes misinformation and that it operates without adequate scrutiny.


Citing the damage caused by misinformation – the US Capitol insurrection in January, lies about the severity of COVID-19, and ongoing vaccine disinformation – Bill Foster (D-IL), Chairman of the US House of Representatives Science, Space, and Technology Committee's Subcommittee on Investigations and Oversight, characterized social media manipulation as a public health threat and lamented the refusal of social media firms to provide the internal data necessary to respond.

"Unfortunately, it is extremely difficult for researchers to gain sufficient access to social media data," said Foster.

"Companies do make some information public, but it is largely through interfaces they control, meaning that researchers can only see what companies want them to. And access can be cut off at any time."

Chickens come home to roost

Foster might have had a recent episode of access denial in mind: Facebook's decision in August to terminate the accounts of NYU researchers who were investigating the company's ad operations.

Laura Edelson, a doctoral candidate in Computer Science at New York University and one of the researchers who lost access to Facebook because of her work on NYU's Ad Observatory Project, was among the three witnesses offering testimony – via video statements and longer written remarks.

"Tobacco companies don't get to decide who does research on smoking and the idea that social media companies get to decide who studies them is perverse," said Edelson.

"Lack of data is currently the most serious barrier to the work of misinformation researchers," she said, noting that Twitter is the only major social media company that allows researchers access to public data, though at a high cost.

In 2016, Facebook bought a company called CrowdTangle, which a few researchers use for access, although it is mainly offered as a business analytics product. Other platforms, such as YouTube and TikTok, she said, offer no suitable tools at all.

Researchers from her own team, researchers from Mozilla, and journalists have attempted to crowdsource social media data, she said. But some of the social media platforms have been hostile: she pointed to Facebook's cancellation of her research team's accounts this summer and to the company's legal threats against Germany's AlgorithmWatch.

"It's time for Congress to act to ensure that researchers and the public have access to data that we need to protect ourselves from online misinformation," she said.

No incentive, and no oversight

Alan Mislove, Professor and Interim Dean of the Khoury College of Computer Sciences at Northeastern University, came to a similar conclusion.

"Social media platforms do not currently have the proper incentives to allow research on their platforms, and have been observed to be actively hostile to important, ethical research that is in the public interest," he said.

"At the same time that such platforms' power and influence is reaching new heights, our ability as independent researchers to understand the impact that they are having is being reduced each day. Thus, I and other researchers need Congress' help to enable researchers to have sufficient access to data from social media platforms in order to ensure that the benets of these platforms do not come at a cost that is too high for society to bear."

Kevin Leicht, Professor of Sociology at the University of Illinois Urbana-Champaign, offered a message along the same lines. "The biggest gap that we see in doing research is in the data and algorithms or the black box the social media companies use to determine what end users see," he said. "And at some level we need access not only to the data but to the black box."

Edelson in her prepared remarks urged Congress to pass a universal digital ad transparency law that would require digital ad platforms to make their ads available in a machine-readable format. And she said she intends to publish a draft proposal soon.
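Edelson's draft proposal has not yet been published, so no schema exists. But as an illustration of what "machine-readable" means in practice, a platform's ad disclosure might look something like the hypothetical record below – every field name here is an assumption for the sake of the sketch, not drawn from any actual proposal or platform API:

```python
import json

# Hypothetical ad-transparency record. Field names and values are
# illustrative assumptions only; they do not reflect Edelson's proposal
# or any existing platform's disclosure format.
ad_record = {
    "ad_id": "123456789",
    "platform": "example-platform",
    "advertiser": "Example Advertiser Inc",
    "creative_text": "Try our product today!",
    "first_shown": "2021-09-01",
    "last_shown": "2021-09-28",
    "spend_usd_range": [1000, 5000],
    "impressions_range": [50000, 100000],
    "targeting_criteria": ["age 18-34", "interest: fitness"],
}

# "Machine-readable" means a researcher can parse records like this
# directly, rather than scraping rendered pages or screenshots.
serialized = json.dumps(ad_record)
parsed = json.loads(serialized)
print(parsed["advertiser"])
```

The point of such a format is that bulk analysis – who bought ads, how much they spent, whom they targeted – becomes a straightforward data-processing task rather than an adversarial scraping exercise of the kind that got the NYU researchers banned.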

Mislove said proposed legislation, such as the Algorithmic Justice and Online Platform Transparency Act of 2021 and the Social Media Disclosure And Transparency of Advertisements (DATA) Act of 2021, would be helpful to researchers.

Mozilla, which helped vet NYU's Ad Observatory software, also recently endorsed the Social Media DATA Act to ensure ad platform transparency.

"Transparency is the first, unescapable step toward holding social media platforms accountable for harmful outcomes," said Marshall Erwin, Chief Security Officer at Mozilla, in a statement emailed to The Register.

"Without insights into what people experience, what ads are presented to them and why, what content is recommended to them and why, we cannot begin to understand how misinformation spreads."

Facebook did not respond to a request for comment. ®
