Australia Enforces Under-16 Social Media Ban as 25% of Teens Use AI Chatbots

Image Credit: Jacky Lee

Australia’s new rules requiring major social media platforms to take “reasonable steps” to stop under-16s from creating or keeping accounts took effect on December 10, raising fresh questions about whether some teenagers will shift to other online services, including AI chatbots, for advice and companionship.

What the Law Changes and Who It Targets

The eSafety Commissioner says the obligation applies from December 10 to platforms it views as age-restricted, including Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X, YouTube, Kick and Reddit, with the list subject to updates. Under the scheme, under-16s and their parents do not face penalties, while platforms can face civil fines of up to A$49.5 million for failing to take reasonable steps.

Why AI Chatbots Are in the Frame

Researchers in Britain have reported a sharp rise in teens turning to online support, including conversational AI, for mental health and related issues. A Youth Endowment Fund study of nearly 11,000 children aged 13 to 17 in England and Wales found 25% had used an AI chatbot for mental health support in the past year. Usage was higher among young people affected by serious violence, with 38% of victims and 44% of perpetrators reporting they had used AI chatbots for mental health support.

Expert Warnings About Substitution and Over-Reliance

Australian authorities and child safety advocates have warned that restricting access to mainstream social platforms may push some teens toward “less regulated” online spaces, even as officials acknowledge implementation will be gradual and workarounds are likely. Prime Minister Anthony Albanese has defended the policy as a mental health measure, while critics argue enforcement and privacy issues remain unresolved.

Critical Thinking and AI Companion Risks

In updated guidance for young people, eSafety urges “Stop, Think, Check” habits and flags that AI tools can hallucinate, reflect bias, and make low-quality content look convincing. It also warns that AI companion apps can feel supportive but may steer users toward harmful content or dangerous advice.

Separately, eSafety’s educator information sheet on AI companions says these tools can encourage lengthy interactions, share harmful content, and contribute to dependency and social withdrawal, noting many companion apps rely on weak age checks.

Social Media Restrictions Do Not Automatically Cover AI Companions

AI companion apps are not automatically captured by the age-restricted social media platform definition, and regulation is instead being pursued through other online safety powers. eSafety has issued legal notices to four AI companion providers, including character.ai, requiring them to explain how they protect children from harms such as sexually explicit content and self-harm-related material.

Earlier, the regulator warned that “unrestricted” chatbots can expose children to unmoderated themes including self-harm and suicide, arguing many tools were not built with child safety in mind.

Legal Challenge and Global Parallels

Reddit has filed a High Court challenge against the law, arguing it infringes the implied freedom of political communication and could require intrusive age verification, a case that adds legal uncertainty even as enforcement begins.

Internationally, UNICEF has cautioned that children are vulnerable to emotional dependency on companion chatbots and urges safeguards and age assurance where proportionate, echoing concerns about conversational AI features that can persuade or manipulate younger users.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
