MIA: The New AI Chatbot Built for Safe Mental Health Triage

AI-generated Image for Illustration Only (Credit: Jacky Lee)

On 29 December 2025, ABC News published a first look at MIA, short for mental health intelligence agent, a specialist chatbot developed by researchers at the University of Sydney’s Brain and Mind Centre. ABC says it was invited to test the system and observe how it handles common mental health prompts, how it avoids some known chatbot risks, and where it still falls short.

Why the Brain and Mind Centre Built It

ABC reports that researcher Dr Frank Iorfino came up with the idea after a friend asked where to get help. He told ABC he was frustrated that the only practical answer he had was “go to your GP”, noting that mental health expertise can vary in primary care and that specialist referrals can involve long waits. ABC says the team’s goal was to provide more immediate, structured guidance that resembles a clinician’s assessment and referral approach.

ABC also places MIA in a broader safety context. The article notes that people may turn to AI chatbots when they cannot access care, but that research and real world incidents have raised concerns about chatbots producing generic, incorrect, or harmful responses.

How MIA Is Designed Differently from General Chatbots

ABC highlights several technical and product design choices intended to reduce risk.

  • A closed knowledge approach: ABC reports MIA does not scrape the internet. Instead, it relies on an internal knowledge bank based on high quality research, and a database of decisions made by real clinicians. ABC reports the team’s view is that this helps reduce hallucinations, though that should be read as the researchers’ design rationale rather than a guarantee of zero errors.

  • Assessment first, advice later: In ABC’s test, MIA began by checking for self harm thoughts to determine whether immediate crisis support was needed. ABC then describes a roughly 15 minute question flow covering support networks, triggers, physical health, and treatment history.

  • User visible assumptions that can be corrected: ABC reports MIA shows what conclusions and assumptions it is making and allows users to edit those conclusions if they are wrong. In practical terms, that is a built in error correction step that many consumer chatbots do not offer; a simplified sketch of this assessment and correction pattern follows this list.

  • Focus area and early testing: ABC says MIA is aimed at mood disorders such as anxiety and depression, and has been trialled on dozens of young people in a user testing study.
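To make the assessment first, user correctable design concrete, here is a minimal sketch of that kind of flow. It is not MIA's code; the class, question wording, and placeholder inference are assumptions based only on what ABC describes: a crisis screen before anything else, a structured question set, and conclusions the user can edit before any recommendation is made.

```python
# Illustrative sketch only. None of this is MIA's actual code; the names and
# questions are hypothetical, modelled on the flow ABC describes.

from dataclasses import dataclass, field

CRISIS_MESSAGE = "Please contact a crisis service such as Lifeline (13 11 14) now."

@dataclass
class Assessment:
    answers: dict = field(default_factory=dict)
    # Conclusions are shown to the user and can be corrected before triage.
    conclusions: dict = field(default_factory=dict)

def run_intake(ask) -> Assessment | None:
    """Assessment-first flow: screen for immediate risk, then gather context."""
    assessment = Assessment()

    # Step 1: crisis screen before any other questions or advice.
    if ask("Are you having thoughts of harming yourself? ").lower().startswith("y"):
        print(CRISIS_MESSAGE)
        return None  # Hand off to urgent help; do not continue the chat.

    # Step 2: structured questions (support network, triggers, health, history).
    for topic in ("support_network", "triggers", "physical_health", "treatment_history"):
        assessment.answers[topic] = ask(f"Tell me about your {topic.replace('_', ' ')}. ")

    # Step 3: surface draft conclusions so the user can correct errors.
    assessment.conclusions = {"main_concern": "anxiety"}  # placeholder inference
    for key, value in assessment.conclusions.items():
        correction = ask(f"I concluded {key} = '{value}'. Edit, or press Enter to accept: ")
        if correction:
            assessment.conclusions[key] = correction

    return assessment

if __name__ == "__main__":
    print(run_intake(input))
```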

The Triage Model behind Its Recommendations

ABC reports that once MIA believes it has enough information, it triages the user using a clinician style framework that spans five levels of care, from level one self management through to level five intensive treatment. ABC states that MIA uses the Australian Government’s Initial Assessment and Referral Decision Support Tool, commonly referred to as the IAR tool, to recommend the most appropriate level of support.

Government guidance describes the IAR Decision Support Tool as an evidence based approach intended to support initial assessment and referral for people presenting with mental health conditions in Australian primary health care settings.

The IAR online service also describes the same five level stepped care model for deciding the highest level of care needed.
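As a rough illustration of the stepped care idea, the sketch below maps a handful of severity ratings onto one of five care levels. The real IAR tool rates multiple assessment domains against its own decision rules; the domain names, ratings, level labels for levels two to four, and the "worst rating plus one" rule here are placeholders, not the tool's actual logic.

```python
# Hedged illustration of a five-level stepped care mapping, not the IAR tool
# itself. Domains, ratings, and the decision rule below are placeholders.

CARE_LEVELS = {
    1: "Self management",             # named in the ABC report
    2: "Low intensity support",       # placeholder label
    3: "Moderate intensity support",  # placeholder label
    4: "High intensity support",      # placeholder label
    5: "Intensive treatment",         # named in the ABC report
}

def recommend_level(domain_ratings: dict[str, int]) -> tuple[int, str]:
    """Map domain ratings (0 = none, 4 = severe) to one of five care levels."""
    if not domain_ratings:
        raise ValueError("at least one domain rating is required")
    worst = max(domain_ratings.values())
    level = min(worst + 1, 5)  # placeholder rule: overall severity drives the level
    return level, CARE_LEVELS[level]

# Example: moderate symptoms with some functional impact lands at level three,
# roughly matching the session ABC describes.
print(recommend_level({"symptom_severity": 2, "functional_impact": 2, "risk": 1}))
```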

What ABC Observed in Its Test Session

ABC reports that in its example session, MIA placed the user at level three and recommended a mix of self care actions and professional support, including exploring cognitive behavioural therapy. ABC also notes the system avoided recommending techniques the user had already said they disliked, and suggested local support services and symptom monitoring.

ABC reports users can return over time and MIA will remember previous sessions. It also reports that patient data is not used to train the model.

How ABC Says It Compares with ChatGPT

ABC describes a direct comparison using the same anxiety prompt. In ABC’s test, ChatGPT asked fewer questions before offering advice, while MIA probed more deeply before recommending a pathway. ABC also reports MIA keeps a more professional tone and does not try to befriend the user, whereas ChatGPT used language such as “I’m here with you” in the tested response.

The distinction is less about wording and more about product intent. General chatbots are optimised for helpful conversation at scale. A mental health triage tool is optimised for risk screening, appropriate escalation, and consistent referral logic.

How It Handled Higher Risk Prompts

ABC reports it tested MIA with prompts suggesting extreme distress. The article says MIA responded in a clinical way, tried to determine risk, and recommended urgent professional help. ABC also reports MIA did not attempt to keep the conversation going after giving that guidance, and Dr Iorfino told ABC this is because the system is engineered to know its limits and avoid giving the impression it can replace professional help.

ABC adds that the team wants future versions to be able to directly refer users into a support service, such as Lifeline, so follow through can be tracked rather than relying entirely on the user to act.

Where It Fell Down

ABC reports MIA initially got stuck repeating a question during the assessment. The team made a fix and ABC says the second attempt ran more smoothly.

ABC also reports that the system uses internal “gates” that responses must pass, which makes it slower than some other chatbots. Dr Iorfino told ABC the goal is for it to complete a full assessment in about five to 10 minutes in future.
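The "gates" idea can be pictured as a pipeline in which every candidate reply must pass a series of safety checks before it is shown, with a safe fallback when any check fails; extra checks of this kind are a plausible reason for the slower responses ABC observed. The sketch below is purely illustrative, and the gate functions are hypothetical stand-ins rather than MIA's actual checks.

```python
# Illustrative sketch of "gated" responses: each candidate reply must pass
# every check before release, trading latency for safety. The gate functions
# are hypothetical stand-ins, not MIA's real checks.

from typing import Callable

Gate = Callable[[str], bool]

def no_diagnosis_claims(reply: str) -> bool:
    return "you have" not in reply.lower()  # crude placeholder check

def stays_in_scope(reply: str) -> bool:
    return "medication dose" not in reply.lower()  # crude placeholder check

GATES: list[Gate] = [no_diagnosis_claims, stays_in_scope]

def release_reply(candidate: str, fallback: str) -> str:
    """Return the candidate only if every gate passes; otherwise a safe fallback."""
    if all(gate(candidate) for gate in GATES):
        return candidate
    return fallback

print(release_reply(
    "It may help to discuss cognitive behavioural therapy with a professional.",
    "I can't help with that, but a clinician or crisis service can.",
))
```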

Regulation and Safety Context in Australia

The ABC report arrives while the Therapeutic Goods Administration is actively consulting on how digital mental health tools should be regulated. The TGA consultation hub lists the Digital Mental Health Tools Regulatory Environment Survey as open until 11 February 2026.

For chatbot developers, the key practical point is that software used for clinical decision support may be regulated depending on intended purpose and whether it meets exclusion criteria. The TGA’s guidance on clinical decision support software outlines how developers should assess whether their product is regulated and what changes to intended purpose can mean for compliance.

Separately, Australia’s online safety regulator has been increasing scrutiny of AI companion style chatbots, requiring some providers to explain how they are protecting children from harms including sexually explicit content and self harm related themes.

How This Fits into Global Product Trends

MIA represents a broader shift away from “chat as therapy” positioning and toward assessment, triage, and workflow support, especially in higher risk settings. That shift is also visible in the market’s churn. For example, Woebot Health states its direct to consumer Woebot app was retired on 30 June 2025, underlining how difficult it can be to operate consumer mental health chatbots under rising safety expectations.

What Happens Next

ABC reports MIA is expected to be ready for public release in 2026, and the researchers want it to be free and hosted somewhere obvious such as the federal government’s Healthdirect website. ABC also reports experts believe therapy chatbots are likely to persist because mental health workforce demand continues to exceed supply, but that trustworthy tools will need stronger safeguards and clear limits.

For readers, the open questions are the ones that will ultimately determine credibility in real world use: how well the system performs at scale, how it handles edge cases, how referrals work in practice, and what independent evaluation and governance sit behind ongoing updates.
