Study of 1,131 Users Links Heavy AI Companion Use to Lower Well-Being

Image Credit: Resat Kuleli | Splash

Folks who lean heavily on AI chatbots for mateship or emotional chats often report feeling less chuffed with life and more isolated, according to a fresh study from Stanford University and Carnegie Mellon University researchers looking at conversations on the Character.AI app.

The boffins surveyed 1,131 users and dug into 4,363 chat sessions containing 413,509 messages from 244 of them, shining a light on possible downsides of these digital pals at a time when loneliness is rife, though the data can't nail down cause and effect.

What's the Go with AI Companions?

AI chatbots built for chit-chat and support have taken off big time thanks to smarter tech that lets them sound almost human. Character.AI, kicked off in 2022 by ex-Google crew, lets punters craft and natter with custom AI characters for fun, advice or a shoulder to cry on, pulling in tens of millions of users around the globe each month.

This rise lines up with more people feeling cut off, made worse by the COVID-19 lockdowns and changing ways we work and live. The US Surgeon General called loneliness an epidemic back in 2023, saying it's as bad for you as puffing on 15 ciggies a day. Past studies show a mixed bag: quick chats with AI might ease stress since they're judgement-free, but getting too hooked could push real mates aside.

The Stanford-CMU team built on those ideas, spurred by stories from users forming tight bonds with AI, including worrying cases where bots egged on self-harm or blurred lines in pretend relationships.

How They Did the Study

Put together by Yutong Zhang, Dora Zhao, Jeffrey T. Hancock and Diyi Yang from Stanford, plus Robert Kraut from Carnegie Mellon, the report "The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being" popped up on arXiv on 17 June 2025. It got the tick from Stanford's ethics board and focused on US-based English speakers nabbed through Prolific, who had to have used Character.AI for at least a month and chatted with three or more bots.

They mixed user surveys with shared chat logs, using AI models like GPT-4o and Llama 3-70B to sort out chat types, how deep conversations got, and how much folks shared about themselves. Well-being got measured with a simple six-question scale on things like life satisfaction, mood, loneliness and feeling supported. They also checked chat intensity with a tweaked social media quiz, personal sharing levels, and real-life social circles with a shortened network scale.
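
The paper's exact prompts and label set aren't given here, so the snippet below is only a rough sketch of what that LLM-based labelling step could look like; the label names, prompt wording and classify_session helper are illustrative assumptions rather than the authors' actual pipeline.

```python
# Illustrative sketch only: labelling a chat session with an LLM.
# The prompt wording, label set and helper name are assumptions for
# demonstration; they are not taken from the Stanford-CMU paper.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LABELS = ["emotional_or_social_support", "romantic_or_roleplay",
          "information_seeking", "entertainment", "other"]

def classify_session(messages: list[str]) -> str:
    """Ask a chat model to assign one coarse label to a session transcript."""
    transcript = "\n".join(messages)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You label chat transcripts. Reply with exactly one "
                        f"label from this list: {', '.join(LABELS)}."},
            {"role": "user", "content": transcript},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    demo = ["User: I had a rough day and just need someone to talk to.",
            "Bot: I'm here for you. What happened?"]
    print(classify_session(demo))
```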

To correct for any skew from the 244 who shared logs (they tended to be younger, single and heavier users), the team applied statistical models that adjust for that selection bias.
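
The article only says the team used statistical models to correct that skew, without naming them, so the sketch below shows one standard option, inverse-probability weighting, on synthetic data; the column names, sample sizes and the choice of method are assumptions, not details from the paper.

```python
# Rough sketch of one common selection-correction approach
# (inverse-probability weighting); the paper's actual models may differ,
# and the data below are made up purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# Hypothetical survey frame: one row per respondent.
df = pd.DataFrame({
    "age": np.random.randint(18, 65, 500),
    "single": np.random.randint(0, 2, 500),
    "usage_intensity": np.random.rand(500),
    "shared_logs": np.random.randint(0, 2, 500),  # 1 = donated chat logs
    "wellbeing": np.random.rand(500) * 6,         # six-item composite score
})

# 1) Model the probability of sharing logs from observed traits.
traits = df[["age", "single", "usage_intensity"]]
propensity = LogisticRegression(max_iter=1000).fit(traits, df["shared_logs"])
p_share = propensity.predict_proba(traits)[:, 1]

# 2) Weight log-sharers by the inverse of that probability so the
#    analysed subsample better resembles the full survey sample.
sharers = df["shared_logs"] == 1
weights = 1.0 / p_share[sharers]

# 3) Re-run the outcome model (well-being on usage intensity) with weights.
X = sm.add_constant(df.loc[sharers, ["usage_intensity", "age", "single"]])
model = sm.WLS(df.loc[sharers, "wellbeing"], X, weights=weights).fit()
print(model.summary())
```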

What They Found

Just 11.8% said mateship was their main reason for using it, with fun and curiosity topping the list instead. But 51.1% described at least one bot as a mate, family member or romantic interest in free responses, even if that wasn't their top pick: 45.8% mentioned friendship or family vibes, and 11.8% romantic ones.

In the logs, 92.9% of shared chat histories had at least one mate-like chat, with breakdowns showing 80.3% touching on emotional or social support and 68.0% on romantic or close role-play (these could overlap). Even among those not chasing mateship, 47.8% dipped into it.

The number-crunching showed people with thinner real-life networks were more likely to seek AI mates, share more deeply and describe bots that way. Mate-focused use was tied to lower well-being scores: the stronger the focus on mateship, the bigger the drop in how satisfied or connected folks felt. For example, those who picked mateship as their main reason scored about half a point lower on the well-being scale, while describing bots as mates or having mate-like chats was linked to drops of around a third or a quarter of a point. The association got worse with heavier use, deeper sharing and weak real-life support, suggesting AI doesn't fully fill the gap for human connection.

What It Means and a Bit of Analysis

The findings hint at a loop where feeling lonely pushes folks to AI for a quick fix, but sticking with it might make isolation worse by taking the edge off the urge to seek out real conversations. This fits into bigger talks on AI ethics, where simulated empathy, without the give-and-take of real relationships, might mess with what we expect from mates.

Other research flags bigger worries for teens, with tests in which researchers pose as under-age users showing these apps can drift into chats about self-harm or other dodgy stuff without pointing anyone to help. It all points to app makers needing built-in checks, like time limits or links to proper support.

Drawbacks include the sample leaning young and the snapshot-style (cross-sectional) data, which can't sort cause from effect. The researchers reckon longer-term studies are needed to clear that up.

Looking Ahead

As AI mates get smarter, expect more rules to keep things safe without stifling the good bits. New York led the way in May 2025 with a law requiring AI companions to disclose they're not human, spot suicide risks and point users to hotlines. Places like California are mulling similar moves, eyeing addictive design tricks.

Ethics guidelines push for privacy guards, fairness checks and human watchdogs, with the American Psychological Association calling for national rules to stop bots pretending to be therapists, as per their 2025 advice. Down the track, we might see AI tuned to cultures or blended with real therapy, aiming to make mental health help easier to get while dodging the pitfalls in online support.
