Therabot AI Chatbot Reduces Depression and Anxiety Symptoms in Clinical Trial

Image Credit: Tim Mossholder | Unsplash

A recent study from Dartmouth College highlights the potential of artificial intelligence in mental health care. The study tested Therabot, a generative AI chatbot designed to support individuals with depression, anxiety, and eating disorders. Conducted as a randomized controlled trial (RCT), the research provides evidence that AI-driven tools can reduce symptoms when used responsibly under clinical oversight.

Study Overview and Methodology

The Dartmouth study, published on March 27, 2025, in NEJM AI, is the first clinical trial to evaluate a generative AI chatbot for mental health treatment. The trial involved 210 participants across the United States, with 106 assigned to use Therabot and 104 placed in a waitlist control group. Participants had been diagnosed with major depressive disorder (MDD) or generalized anxiety disorder (GAD), or were at clinical high risk for feeding and eating disorders (CHR-FED). The intervention group used Therabot via a smartphone app for four weeks, with access extended for an additional four weeks without prompts. The control group gained access to the app after eight weeks.

Therabot, developed since 2019 by Dartmouth’s AI and Mental Health Lab, uses a generative large language model fine-tuned with evidence-based cognitive behavioral therapy (CBT) techniques. Unlike general-purpose chatbots, Therabot was trained on therapist-patient dialogues crafted by mental health experts, including psychologists and psychiatrists. Users interacted through text, responding to daily prompts or initiating conversations during moments of distress. The study measured symptom changes at four and eight weeks using standardized instruments: the Patient Health Questionnaire (PHQ-9) for depression, the Generalized Anxiety Disorder Questionnaire for anxiety, and the Weight Concerns Scale for eating disorders.

Key Findings

The trial demonstrated significant symptom reductions among Therabot users compared with the control group. Participants with depression experienced an average 51% reduction in symptoms. Those with anxiety reported a 31% reduction, with many shifting from moderate to mild anxiety or dropping below clinical thresholds. Individuals at risk for eating disorders saw a 19% decrease in concerns about body image and weight, significantly outperforming the control group. Researchers noted these improvements were similar to outcomes observed in traditional outpatient therapy, though direct comparisons were not part of the study.

Engagement was a critical factor in the study’s success. Participants used Therabot for an average of six hours over the trial, equivalent to about eight therapy sessions. Many initiated conversations during late-night hours or moments of acute distress, highlighting the chatbot’s ability to provide real-time support. Users reported a “therapeutic alliance” with Therabot, describing interactions as intuitive, helpful, and comparable to working with a human therapist. This bond, typically reserved for human-led therapy, underscores the potential of AI to foster trust and collaboration.

AI’s Role and Safety Measures

The AI-driven nature of Therabot is central to its effectiveness. Unlike rule-based chatbots with pre-programmed responses, Therabot’s generative AI allows for open-ended, natural dialogue tailored to users’ needs. For example, if a user expressed feeling overwhelmed, Therabot might respond, “Let’s take a step back and ask why you feel that way,” guiding them through CBT-based strategies. The chatbot was also designed to detect high-risk content, such as suicidal ideation, and to prompt users to contact emergency services or crisis hotlines via onscreen buttons.

Safety was a priority in the trial. Clinicians monitored Therabot’s responses, intervening when necessary for safety concerns and correcting occasional inappropriate responses. Over 90% of Therabot’s outputs in earlier evaluations aligned with therapeutic best practices, providing confidence for the trial. However, researchers emphasized that generative AI carries risks due to its unpredictable nature, requiring ongoing clinician oversight to ensure safety and efficacy.

Implications for Mental Health Care

The study’s findings suggest that AI chatbots like Therabot could address gaps in mental health care, particularly in areas with provider shortages. In the U.S., an estimated 1,600 patients with depression or anxiety compete for each mental health provider, leaving many untreated. Therabot’s 24/7 availability and scalability make it a potential supplement to traditional therapy, offering support when human therapists are unavailable. Approximately 75% of participants were not receiving other therapy or medication, indicating that AI tools could serve underserved populations.

The trial also highlights AI’s ability to enhance patient engagement. Unlike digital therapeutics that often struggle with retention, Therabot sustained high usage levels, partly due to its empathetic and personalized responses. Some participants reported feeling more comfortable opening up to the chatbot than to human therapists, suggesting AI could reduce stigma and encourage help-seeking behavior.

Limitations and Challenges

Despite its promise, the study has limitations. The sample size of 210 participants, while adequate for an RCT, is relatively small, and further research with larger groups is needed to confirm generalizability. The trial lasted eight weeks, leaving questions about long-term effectiveness and sustained symptom relief. Additionally, while Therabot outperformed the control group, it is not a replacement for human therapists, particularly in high-risk cases requiring nuanced intervention.

Critics note that generative AI’s unpredictability poses risks. Early versions of Therabot exhibited problematic behaviors, such as expressing despair or reinforcing therapy stereotypes, which were addressed through rigorous fine-tuning. Independent experts caution that the study’s results, while promising, require real-world validation to ensure scalability and safety. Ethical concerns, such as potential over-reliance on AI or emotional dependence, also warrant further exploration.

Source: Dartmouth, Open Access Government, Psychology Today

TheDayAfterAI News
