Deplatforming hate forums doesn't work, British boffins warn

Industry intervention alone can't deal with harassment

Updated Depriving online hate groups of network services - otherwise known as deplatforming - doesn't work very well, according to boffins based in the United Kingdom.

In a recently released preprint paper, Anh Vu, Alice Hutchings, and Ross Anderson, from the University of Cambridge and the University of Edinburgh, examine efforts to disrupt harassment forum Kiwi Farms and find that community and industry interventions have been largely ineffective.

Their study, undertaken as lawmakers around the world consider policies that aspire to moderate unlawful or undesirable online behavior, reveals that deplatforming has only a modest impact, and that those running harmful sites remain free to carry on harassing people through other services.

"Deplatforming users may reduce activity and toxicity levels of relevant actors on Twitter and Reddit, limit the spread of conspiratorial disinformation on Facebook, and minimize disinformation and extreme speech on YouTube," they write in their paper. "But deplatforming has often made hate groups and individuals even more extreme, toxic and radicalized."

As examples, they cite how Reddit's ban of r/incels in November 2017 led to the creation of two incel domains, which then grew rapidly. They also point to how users banned from Twitter and Reddit "exhibit higher levels of toxicity when migrating to Gab," among other similar situations.

The researchers focus on the deplatforming of Kiwi Farms, an online forum where users participate in efforts to harass prominent online figures. One such target was a Canadian transgender streamer known as @keffals on Twitter and Twitch.

In early August last year, a Kiwi Farms forum member allegedly sent a malicious warning to police in London, Ontario, claiming that @keffals had committed murder and was planning further violence, which resulted in her being "swatted" - a form of attack that has proved lethal in some cases.

Following further doxxing, threats, and harassment, @keffals organized a successful campaign to pressure Cloudflare to stop providing Kiwi Farms with reverse proxy security protection, which helped the forum defend against denial-of-service attacks.

The research paper outlines the various interventions taken by internet companies against Kiwi Farms. After Cloudflare dropped Kiwi Farms on September 3 last year, DDoS-Guard did so two days later. The following day, the Internet Archive and hCaptcha severed ties.

On September 10, the kiwifarms.is domain stopped working. Five days later, security firm DiamWall suspended service for those operating the site.

On September 18, all the domains used by the forum became inaccessible, possibly related to an alleged data breach. But then, as the researchers observe, the Kiwi Farms dark web forum was back by September 29. There were further intermittent outages on October 9 and October 22, but since then Kiwi Farms has been active, apart from brief service interruptions.

"The disruption was more effective than previous DDoS attacks on the forum, as observed from our datasets. Yet the impact, although considerable, was short-lived." the researchers state.

"While part of the activity was shifted to Telegram, half of the core members returned quickly after the forum recovered. And while most casual users were shaken off, others turned up to replace them. Cutting forum activity and users by half might be a success if the goal of the campaign is just to hurt the forum, but if the objective was to 'drop the forum,' it has failed."

Hate is difficult to shift

One reason for the durability of such sites, the authors suggest, is that activists get bored and move on, while trolls are motivated to endure and survive. They argue that deplatforming doesn't look like a long-term solution because, while casual harassment forum participants may scatter, core members become more determined and can recruit replacements through the publicity arising from censorship.

Vu, Hutchings, and Anderson argue that deplatforming by itself is insufficient and needs to be done in the context of a legal regime that can enforce compliance. Unfortunately, they note, this framework doesn't currently exist.

"We believe the harms and threats associated with online hate communities may justify action despite the right to free speech," the authors conclude. "But within the framework of the EU and the Council of Europe which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law."

They also contend that police work needs to be paired with social work, specifically education and psycho-social support, to deprogram hate among participants in such forums.

"There are multiple research programs and field experiments on effective ways to detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate," they argue. "But most countries still lack a unifying strategy for violence reduction." ®

Updated to add

In a comment received after this story was filed, a spokesperson for the Anti-Defamation League disagreed with the report's findings. "We issued a report on this in February, and, in short, we do believe deplatforming is effective," the ADL spokesperson said.
