Australia not banning kids from YouTube – they’ll just have to use mum and dad’s logins

Regulator acknowledges that won’t stop video nasties, but welcomes extra ‘friction’

Australia’s cyber-safety regulator eSafety has advised its government that YouTube is as dangerous as other social networks, opening the door for the video-streaming site to be included in the Land Down Under’s plan to prevent Big Tech from allowing kids under 16 to sign up for accounts.

Australia’s government plans to enforce the scheme from December 10th when, per the requirements of the Online Safety Amendment (Social Media Minimum Age) Act 2024, operators of some social media services “must take reasonable steps to prevent children who have not reached a minimum age from having accounts.”

The responsible minister can use the Act to designate social media services that must try to stop kids signing up for accounts. As of last November the government had named TikTok, Facebook, Snapchat, Reddit, Instagram, and X as certainties for regulation, but said that list of services represented a “minimum”.

The minister at the time, Michelle Rowland, promised not to use the Act’s powers to regulate “messaging services, online games, and services that significantly function to support the health and education of users.”

The government therefore planned to exempt YouTube from regulation, an omission that many found peculiar.

Australia’s May 2025 election saw its government re-elected. Prime Minister Anthony Albanese appointed a new minister to oversee the Act – Anika Wells MP – and she sought advice from eSafety about how the law could best be applied.

eSafety duly delivered the advice, which included a recommendation that YouTube be regulated under the Act.

Google objected strongly to eSafety’s advice, arguing that YouTube contains plenty of content that can support young users and that the law passed without mention that the vid-streaming site would be regulated.

eSafety fired back as follows:

YouTube currently employs many of the same features and functionality associated with the harms that the legislation is seeking to address. These include features such as autoplay, endless content and algorithmically recommended content. These features along with shortform video content may encourage excessive consumption without breaks and amplify exposure to harmful content.

eSafety Commissioner Julie Inman Grant later delivered a speech in which she cited new research finding that 70 percent of children aged 10 to 15 reported encountering “content associated with harm, including exposure to misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting disordered eating.”

Inman Grant also noted that YouTube has reportedly eased its content moderation efforts and cited such shifts as a reason to revisit which platforms the Act regulates. She also pointed out that AI applications are already putting kids at risk and suggested Australia will need to regulate them, too.

Inman Grant also reminded Australians that the Act means kids will still be able to use YouTube at home or school – so long as they sign in with an account established by an adult.

Which means kids can still encounter harmful content.

Inman Grant acknowledged the Act could therefore appear futile, admitting it “won’t solve everything” but arguing it “will create some friction in the system” that helps to reduce the harms caused by online services.

The world is watching

The Commissioner also noted that Australia’s legislation has sparked debate on how to protect children online in other nations.

“And, I can assure you, they are all beating down our door to find out just how Australia plans to take this bold regulatory action forward,” she said in her speech.

Other nations may also be watching Australia’s efforts to determine how tech companies can detect the age of users who try to sign up for accounts because, as we reported last week, trials have found the tech feasible but deeply flawed. ®
