TikTok under investigation in US over harms to children
Probe builds on one already in progress for Instagram
A number of US attorneys general are investigating reports that ByteDance-owned social media platform TikTok is harmful to children.
The investigation was announced on Wednesday by a bipartisan coalition of state attorneys general including those for California, Florida, Kentucky, Massachusetts, Nebraska, New Jersey, Tennessee, and Vermont.
The concern is that the algorithm, which determines what content users see, sends viewers, youths in particular, down an addictive and harmful rabbit hole.
"Our children are growing up in the age of social media – and many feel like they need to measure up to the filtered versions of reality that they see on their screens," said California attorney general Rob Bonta. "We know this takes a devastating toll on children's mental health and well-being. But we don't know what social media companies knew about these harms and when."
Or as Massachusetts attorney general Maura Healey put it, the coalition is "examining whether the company violated state consumer protection laws and put the public at risk."
The offices of both Bonta and Healey said the investigation focuses on TikTok's techniques for boosting user engagement in youths, including strategies employed to increase the duration of time spent on the platform.
The investigation builds on another launched by the coalition last November into harms caused by Meta's Instagram after reports surfaced of internal research that said its use was associated with increased physical and mental health harms.
This week US president Joe Biden called for a ban on social networks serving ads targeted at children in his State of the Union speech.
"It's time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children," said Biden as Facebook whistleblower Frances Haugen sat in the audience.
Dangers associated with TikTok go beyond internet addiction and depression, critics say; users on the site have been known to promote misinformation and extremist indoctrination.
Researchers Abbie Richards and Olivia Little found that it took 400 TikTok videos before hateful content was introduced to a viewer following an interaction with transphobic content. While 400 videos sounds like a significant amount, the brevity of TikTok videos means this threshold can be reached in just two hours.
"A user could feasibly download the app at breakfast and be fed overtly white supremacist and neo-Nazi content before lunch," wrote Little and Richards.
TikTok took to its website in 2020 to give some information on how it recommends videos, but revealed very little about the company's algorithm. It did say that recommendations are based on user interactions, video information such as captions, sounds, and hashtags, as well as device and account settings.
The Register asked ByteDance to comment. A statement TikTok has given to several media outlets reads:
We care deeply about building an experience that helps to protect and support the wellbeing of our community, and appreciate that the state attorneys general are focusing on the safety of younger users.
We look forward to providing information on the many safety and privacy protections we have for teens.
A report from app analytics platform data.ai last November predicted TikTok would surpass 1.5 billion active users in 2022. ByteDance's total revenue grew 70 per cent year-on-year to around $58bn in 2021. ®