Once again, Facebook champions privacy ... of its algorithms: Independent probe into Instagram shut down
AlgorithmWatch ends newsfeed study after 'thinly veiled threat'
AlgorithmWatch, a non-profit group based in Germany, said it has been forced to end its efforts to monitor Instagram's newsfeed after parent company Facebook intervened.
In July, the advocacy organization shuttered its Instagram transparency project, launched in March 2020, citing what it described as thinly veiled legal threats after Facebook claimed the group's data-collecting browser extension violated its Terms of Service and Europe's GDPR.
"On 13 July, we took the decision to terminate the project and delete any collected data (media partners still have fully anonymized versions of the data)," said Nicolas Kayser-Bril, a data journalist with AlgorithmWatch, in a blog post published on Friday. "Ultimately, an organization the size of AlgorithmWatch cannot risk going to court against a company valued at one trillion dollars."
Kayser-Bril said AlgorithmWatch chose to speak up after Facebook earlier this month shut down a similar transparency project, run by New York University researchers, to examine who sees Facebook's ads and what the ads say.
The antisocial network suggested it had to block the NYU ad research under the terms of its 2019 settlement with the US Federal Trade Commission for violating consumers' privacy.
Challenged on that assertion, Facebook subsequently backtracked (though it continued to cite privacy concerns as its justification), and the FTC took the unusual step of issuing a statement refuting the internet goliath's claim and endorsing scrutiny of "surveillance-based advertising."
"The FTC is committed to protecting the privacy of people, and efforts to shield targeted advertising practices from scrutiny run counter to that mission," said Samuel Levine, Acting Director of the FTC's Bureau of Consumer Protection.
The Instagram project, supported by various European media organizations, is said to have attracted over 1,400 volunteers who installed the AlgorithmWatch browser extension over the past 14 months.
The extension – available for Chrome and Firefox, which prefers the term "add-on" – is designed to look for the posts of a set of Instagram users who earn income from the platform. It asks volunteers to follow three specific accounts and then gathers data about the images and videos that show up in their newsfeeds, as well as some of the accounts followed by participants.
"All the information is anonymized in a way that makes it impossible for us to re-identify data donors," the group claims.
According to Kayser-Bril, the information gathered about Instagram's automated decision making showed that the service "likely encouraged content creators to post pictures that fitted specific representations of their body, and that politicians were likely to reach a larger audience if they abstained from using text in their publications," findings that Facebook is said to have denied.
Kayser-Bril said the need to understand how algorithms shape media is underscored by recent reports in Colombia and the Middle East that images of protests have been removed without notice.
"Without independent public interest research and rigorous controls from regulators, it is impossible to know whether Instagram’s algorithms favor specific political opinions over others," said Kayser-Bril. "Previous reporting in the United States showed that Facebook took some product decisions in order to protect alt-right figures."
In an email to The Register, a Facebook spokesperson insisted the ad mega-corp is only acting to protect user privacy.
"We believe in independent research into our platform and have worked hard to allow many groups to do it, including AlgorithmWatch – but just not at the expense of anyone’s privacy," the pokesperson said. "We had concerns with their practices, which is why we contacted them multiple times so they could come into compliance with our terms and continue their research, as we routinely do with other research groups when we identify similar concerns."
Facebook also denied making any overt threats – the researchers cited "a thinly veiled threat."
"We did not threaten to sue them," the social network's spokesperson said. "The signatories of this letter believe in transparency – and so do we. We collaborate with hundreds of research groups to enable the study of important topics, including by providing data sets and access to APIs, and recently published information explaining how our systems work and why you see what you see on our platform. We intend to keep working with independent researchers, but in ways that don’t put people’s data or privacy at risk."
Asked whether Facebook intends to reinstate the cancelled accounts of the NYU researchers, the spokesperson declined to comment. ®