
TikTok proposes coalition with other social apps to curb harmful content

Seemingly ignorant of the fact that Microsoft, Google, Facebook and Twitter already operate a similar project

Made-in-China social media app TikTok has proposed a code that a coalition of social media firms would use to help identify and remove harmful content from their platforms.

TikTok's interim boss, Vanessa Pappas, sent a letter on Monday to nine social media companies proposing a “collaborative approach to early identification and notification… of extremely violent, graphic content, including suicide”.

“By working together and creating a hashbank for violent and graphic content, we could significantly reduce the chances of people encountering and enduring the emotional harm that viewing such content can bring,” the company said in a statement.
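The mechanics of such a hashbank are simple in outline: one platform fingerprints a known harmful file and shares the digest, and the others check uploads against the shared set. The sketch below is purely illustrative (the function and variable names are our own, not TikTok's); production systems such as GIFCT's shared database use perceptual hashes that survive re-encoding and cropping, whereas a plain SHA-256 digest, used here for brevity, only catches byte-identical copies.

```python
import hashlib


def content_hash(media_bytes: bytes) -> str:
    """Return a hex digest identifying this exact media file."""
    return hashlib.sha256(media_bytes).hexdigest()


# Digests contributed by coalition members (illustrative, not real data).
hash_bank: set[str] = set()


def flag_known_content(media_bytes: bytes) -> bool:
    """True if an upload matches a digest already in the shared bank."""
    return content_hash(media_bytes) in hash_bank


# One platform identifies a harmful video and shares its digest...
hash_bank.add(content_hash(b"known-harmful-video"))

# ...so every participant can block re-uploads of the identical file.
print(flag_known_content(b"known-harmful-video"))  # True
print(flag_known_content(b"unrelated-video"))      # False
```

Hash sharing avoids the need to pass the offending media itself between companies: only the fingerprints circulate.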

The plan appears to be a response to a recent video that appeared on the platform and reportedly displayed a suicide. So graphic was the video that schools around the world sent warnings to parents about its content and the company was yesterday hauled before a UK parliamentary committee to explain how the video nasty went viral.

The company's European director of public policy, Theo Bertram, told MPs the video, which was originally broadcast on Facebook, was "the result of a coordinated attack from the dark web".

“What we saw was a group of users who were repeatedly attempting to upload the video to our platform, and splicing it, editing it, cutting it in different ways,” he said. “I don’t want to say too much publicly in this forum about how we detect and manage that, but our emergency machine-learning services kicked in, and they detected the videos.”

TikTok has not named the nine social networks it has tried to enlist in its plan. Nor did it acknowledge that a coalition to remove online nasties already exists in the form of the Global Internet Forum to Counter Terrorism (GIFCT).

GIFCT was formed in 2017 by Facebook, Microsoft, Twitter and Google, and the four companies have pledged to "work together to refine and improve existing joint technical work, such as the Shared Industry Hash Database; exchange best practices as we develop and implement new content detection and classification techniques using machine learning; and define standard transparency reporting methods for terrorist content removals.”

Social networks have also taken action in response to the livestream of the 2019 Christchurch massacre, which was broadcast on Facebook.

In the aftermath of the shooting, Facebook said it removed 1.5m copies of the video in the first 24 hours after the attack, and later, an additional 4.5m pieces of content related to the attack using its “media-matching systems”.

The company later endorsed a set of non-binding commitments advanced by New Zealand’s prime minister, Jacinda Ardern, “to prevent the upload of terrorist and violent extremist content" and said it would temporarily ban users who violate its most serious policies from its live-streaming service, Facebook Live.

But critics have argued that Facebook is not shouldering enough of the responsibility for removing harmful content. According to the Tech Transparency Project, copies of the massacre video uploaded shortly after the shooting could still be found on the site a year later.

TikTok removed 104m videos from its platform globally in the first half of this year for violating its terms of service, it said in a transparency report released on Monday. Over 90 per cent of these videos were removed before they were viewed, the company said. ®
