Australia’s Under-16 Social Media Ban: AUD 49.5m Fines & 10 Key Apps Listed
Image Credit: Mariia Shalabaieva | Unsplash
Australia’s pioneering under-16 social media restrictions, enacted through the Online Safety Amendment (Social Media Minimum Age) Act 2024, represent a major shift in digital regulation. The law, passed by Parliament on 29 November 2024 and receiving Royal Assent in December 2024, amends the Online Safety Act 2021 by inserting a new Social Media Minimum Age (SMMA) framework in Part 4A.
The core obligation takes effect on 10 December 2025. From that date, providers of age-restricted social media platforms must take “reasonable steps” to prevent Australians under 16 from creating or keeping accounts. The scheme places the burden on platforms rather than families or young people.
Scope, Requirements and Penalties
How the scope is defined
The regime does not operate purely as a fixed government “designation list”. Instead, it relies on:
a statutory definition of “age-restricted social media platform,” and
Ministerial rule-making powers that can narrow or further target that definition.
To help industry and the public prepare, eSafety has published its view of which services are likely in scope. As of 21 November 2025, eSafety considers the following 10 services to be age-restricted: Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X, YouTube, Kick and Reddit. eSafety also notes it does not have a formal role in “declaring” platforms; absent Ministerial Rules explicitly naming a service, ultimate legal determination may be a matter for the courts.
What is excluded
The Online Safety (Age-Restricted Social Media Platforms) Rules 2025 exclude categories of services such as messaging/voice/video calling services, online games, professional networking, and services that primarily support education or health. This is intended to balance child protection with access to essential communication and support functions.
Accounts vs browsing
The rule targets accounts rather than casual viewing. Under-16s will generally not be prevented from accessing content in a logged-out state, even on age-restricted platforms.
“Reasonable steps” and age assurance
While no single technical method is mandated, government guidance indicates platforms will likely need some form of age assurance. A key safeguard is that no Australian can be compelled to use government identification or Digital ID as the only way to prove age, and platforms must offer reasonable alternatives. The framework also requires that data collected for age assurance be ringfenced and destroyed after use, with these privacy obligations enforced alongside the existing Privacy Act 1988.
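To illustrate the data-minimisation principle behind that requirement, here is a minimal sketch, not a description of any platform's actual system; the function and type names (run_age_assurance, AgeCheckResult, the estimator callable) are hypothetical. It shows the pattern the guidance points towards: accept a choice of age-assurance methods, keep only the yes/no outcome, and never retain the underlying evidence.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class AgeCheckResult:
    """Only the outcome of the check is retained, never the evidence behind it."""
    over_16: bool
    method: str  # e.g. "facial_age_estimation", "id_document", "parental_confirmation"

def run_age_assurance(
    evidence: bytes,
    method: str,
    estimate_age: Callable[[bytes, str], int],
) -> AgeCheckResult:
    """Derive a yes/no age signal and keep nothing else.

    Platforms must offer a choice of methods (government ID or Digital ID
    cannot be the only route), and data collected for the check must be
    destroyed once it has served its purpose.
    """
    estimated_age = estimate_age(evidence, method)
    # Only the boolean result leaves this function; `evidence` is never logged
    # or stored, and a real system would also purge any copies held by the
    # age-assurance provider.
    return AgeCheckResult(over_16=estimated_age >= 16, method=method)

# Example run with a dummy estimator standing in for a real provider.
if __name__ == "__main__":
    dummy_estimator = lambda evidence, method: 17
    print(run_age_assurance(b"selfie-bytes", "facial_age_estimation", dummy_estimator))
```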
Penalties and oversight
Corporate penalties for breaching the minimum age obligation can reach 150,000 penalty units, currently equivalent to about A$49.5 million. The law does not impose criminal penalties on young people or parents. The scheme is co-regulated by eSafety and the Office of the Australian Information Commissioner (OAIC), with eSafety focused on platform compliance standards and the OAIC overseeing the privacy provisions.
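The headline figure follows directly from the Commonwealth penalty unit value, which is A$330 at the time of writing (the unit value is indexed and adjusted periodically):

$$150{,}000 \text{ penalty units} \times \mathrm{A}\$330 \text{ per unit} = \mathrm{A}\$49{,}500{,}000$$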
Early Implementation Signals from Platforms
With the deadline approaching, several companies have begun pre-compliance action:
Meta started disabling underage Facebook and Instagram accounts from 4 December 2025, with reporting suggesting about 500,000 accounts were affected in the first wave. Affected users are prompted to download their data and can appeal via age-verification pathways if wrongly flagged. Because Threads requires an Instagram account, it is also indirectly affected.
YouTube has confirmed it will comply. From 10 December, under-16 users are expected to be signed out and unable to use account features such as commenting or uploading, though they can still view content while logged out. The company has argued that forced logged-out use may reduce access to some safety or parental controls that operate when signed in.
Other major platforms including TikTok and Snapchat have indicated they will implement age-assurance measures to meet the “reasonable steps” requirement, while some services have been slower to publicly detail their exact approach.
Evidence Base and Policy Rationale
Government and regulator messaging has pointed to persistent risks for younger users. eSafety’s 2025 research on 10–15-year-olds found that 96% had used at least one social media platform and that roughly seven in ten reported encountering content associated with harm. The harms cited span hateful or misogynistic material, violent content, dangerous challenges, and content promoting disordered eating or self-harm.
The two figures are often conflated in coverage: the 96% refers to platform use, not to exposure to harmful content.
Safety, Rights and Equity
The imminent enforcement has reignited debate about proportionality and unintended consequences:
Protection vs participation: Supporters argue the rule is a necessary reset after years of concern about cyberbullying, addictive design, and exposure to harmful content. Critics say a broad access threshold may limit expression, social connection, and informal learning opportunities for younger teens.
Privacy risks: Even with statutory safeguards, civil liberties and privacy advocates remain concerned about age-estimation errors and the risks of expanded identity checks. The OAIC’s guidance emphasises that compliance will not be considered reasonable if platforms fail to meet privacy obligations and data-destruction requirements.
Equity concerns: Youth advocates warn that some under-16s, especially those in remote areas or marginalised communities, may rely on online spaces for identity support and peer connection. Regulators have indicated they will monitor how the ecosystem adapts once the rule is in force.
Shift to Smaller Services
Australian media reporting suggests some younger users are already exploring alternatives. ABC News reports that services such as Lemon8 and Yope have been advised by eSafety to self-assess against the new definition and rules, and that interest in alternative apps is rising as the deadline nears.
This underscores a key implementation challenge: tightening access on major platforms may push some teens toward smaller or evolving services that could be less mature in safety tooling.
What to Watch After 10 December
The framework requires a review within two years of effective commencement, giving the government a mechanism to refine the definition, assess the effectiveness of privacy protections, and respond to changing platform behaviour and technology.
Whether Australia’s approach becomes a durable global template will likely depend on three near-term outcomes:
how consistently platforms can implement age assurance without excessive data collection,
whether harmful exposure among younger teens measurably declines, and
whether migration to smaller services creates new safety gaps.
