Meta Adds New Teen AI Controls as 13% of Youth Turn to Chatbots for Mental Health


Meta Platforms has introduced new parental controls for its AI chatbots on Instagram and Facebook, giving families more say over how teenagers interact with AI companions online.

Announced on 17 October 2025, the tools are designed for teen accounts (generally users under 18) and will roll out over the coming months, with some features arriving on Instagram in early 2026. Parents will be able to disable one-on-one chats with Meta’s AI characters, limit which AI personalities their teens can access, and view summaries of the general topics their children discuss with the bots.

The move comes amid growing concern that young people are increasingly turning to AI companions for emotional support, raising questions about safety, emotional dependency and the risk that technology might delay access to professional help. Meta presents the new controls as part of its wider effort to build and deploy AI in a more responsible way, while critics argue that voluntary tools still fall short of the stronger, industry-wide safeguards many have been calling for.

Teens Turning to AI for Emotional Support

The decision is rooted in a rapid shift in how adolescents interact with technology.

A recent study by researchers affiliated with RAND, published in a medical journal in November 2025, found that roughly one in eight U.S. youths aged 12 to 21, about 13 percent, had used AI chatbots for mental-health or emotional advice, with usage highest among young adults aged 18 to 21. Other surveys from youth-focused organisations report that a clear majority of teens have tried generative AI tools at least once, whether for schoolwork, entertainment or more personal conversations.

Experts say the appeal is easy to understand: AI companions are always available, seemingly non-judgmental and can simulate empathy by drawing on patterns learned from vast datasets. For teens who feel unable or unwilling to open up to parents, teachers or clinicians, a chatbot may feel like the safest place to vent.

But professional bodies have urged caution. Psychologists writing in professional magazines and journals have warned that while AI chats can provide short-term relief or a sense of being “heard”, they generally lack the training, accountability and safeguards of human therapists. There is particular concern that teens might receive inappropriate or incomplete advice on issues such as self-harm, eating disorders or abuse, or come to rely on AI companions in ways that crowd out real-world relationships.

Outside the United States, similar dynamics are emerging. In Australia, for example, official guidance and media reports describe young people experimenting with AI assistants for stress management and relationship questions, while separate studies and inquiries highlight that youth mental-health services often face long wait times and shortages of specialists. In many regions, it can take several months to secure ongoing counselling, making low-friction digital tools especially attractive as a first port of call.

From Public Backlash to Built-In Guardrails

Meta’s latest announcement follows months of scrutiny of AI companions and their suitability for minors.

In August 2025, a media investigation reported that internal guidance at Meta had allowed its AI characters to engage in “romantic” or “sensual” conversations with children, including on sensitive topics that many parents would consider off-limits. The revelations sparked public backlash and prompted Meta to revise some of its policies, including limiting children’s access to certain AI personas and adjusting training so that bots would not encourage or sustain romantic conversations with under-18s.

Regulators were already circling. In September, the U.S. Federal Trade Commission launched a formal study into AI chatbots that act as companions, with a specific focus on potential harms to children and teenagers as well as privacy risks. The inquiry is examining how companies design, train and monetise these systems, and what data is collected during sensitive or emotionally charged exchanges.

On Capitol Hill, Senator Josh Hawley and other lawmakers have pressed Meta and its peers for answers about how often AI chatbots have engaged in inappropriate conversations with minors and what safeguards are in place. A separate bipartisan proposal, known as the GUARD Act, would go further by restricting minors’ access to certain AI chatbots altogether and requiring robust age-verification systems.

Against that backdrop, Meta’s new controls are being integrated into its existing Family Center, where parents already manage time limits and content settings for teen accounts. The company says the topic summaries visible to parents are designed to show broad themes, such as homework help or general wellbeing questions, without exposing full transcripts of conversations, in an effort to balance oversight with some level of teen privacy.

Meta has not yet published detailed public technical documentation about exactly how these filters work or how long data from AI chats is retained, an area advocates say will need more transparency as the tools roll out globally.

A New Layer of Control, But Not a Cure-All

For families, the new controls are likely to feel practical and concrete. Parents who are uneasy about private AI chats can simply switch them off, choose which AI characters are available, or use topic summaries as a prompt for offline conversations with their teens.

Child-safety experts generally see such tools as a step in the right direction, particularly for younger teens and in households where parents are actively engaged in digital wellbeing. They could also help limit exposure to obviously risky content, such as sexually explicit prompts or unfiltered discussions of self-harm, especially when combined with other platform-level safety measures.

At the same time, clinicians and researchers caution against over-reliance on platform settings. Mental-health specialists point out that there is still limited evidence about the long-term psychological impact of daily conversations with AI companions, especially for adolescents whose identities and social skills are still developing.

Some early research suggests potential benefits in controlled environments: a growing body of studies, including meta-analyses published in digital-health journals, has found that AI-driven conversational agents can moderately reduce depressive symptoms when they are designed as structured interventions and evaluated under clinical supervision. However, those findings do not automatically translate to open-ended consumer chatbots used at home, where there are fewer guardrails and no direct clinical oversight.

Advocates also stress that digital tools, however sophisticated, cannot solve underlying capacity gaps in youth mental-health systems. In Australia, for example, media and academic reports frequently highlight long waiting lists and shortages of specialists, particularly in regional and rural areas. AI companions may ease some feelings of isolation, but they risk being treated as substitutes for professional care rather than as temporary support.

Regulation, Design Standards and Digital Literacy

The rollout of Meta’s parental controls is widely seen as an early test of how major tech platforms will respond to mounting pressure over AI companions for minors.

In the United States, lawmakers are debating how far to go in regulating what kinds of AI systems children can use. Proposals under discussion include stricter age-verification requirements, enhanced transparency around training data and safety testing, and, in some cases, outright restrictions on romantic or emotionally intimate AI chatbots for under-18s.

In Europe, the newly adopted AI Act imposes additional obligations on providers of high-risk AI systems and bans certain manipulative practices, including those that exploit children’s vulnerabilities. While consumer chatbots do not all fall into the highest-risk category, many legal experts expect that tools marketed toward teens or used at scale could face tougher scrutiny under future guidance.

Australia’s eSafety Commissioner and other regulators are also examining how existing online-safety laws apply to AI companions, with an emphasis on ensuring that systems do not encourage self-harm, hate, or other serious harms and that reporting mechanisms work effectively for young users.

Beyond formal rules, ethicists and digital-rights advocates argue for a broader shift in design philosophy. They want companies to move away from engagement-driven metrics—such as maximising time spent chatting—and toward standards that prioritise wellbeing, clear crisis-response pathways, and honest communication about the limitations of AI.

Educators, meanwhile, see a role for digital literacy: teaching teens how AI works, what it can and cannot reliably do, and why even convincing-sounding responses might be incomplete, biased or wrong. Surveys suggest that a large majority of Gen Z has already used generative AI tools, which means the question is less whether young people will encounter AI companions and more how prepared they are to navigate them safely.

Meta’s new parental controls will not resolve these bigger questions on their own. But they mark a significant acknowledgment from one of the world’s largest social platforms that AI companions and teen mental health are now inextricably linked—and that families, regulators and companies will all have a role in shaping what safe, responsible use looks like in the years ahead.
