OpenAI Weighs Health Assistant for 800 Million ChatGPT Users

AI-generated Image for Illustration Only (Credit: Jacky Lee)

OpenAI, the creator of ChatGPT, is weighing a push into consumer health products – including a generative-AI-powered personal health assistant – as it looks to move beyond core AI infrastructure and into industry-specific applications such as medicine, according to people familiar with the company’s plans.

Business Insider reported on November 10 that OpenAI is exploring several consumer-facing health tools, from a personal health assistant to a service that could aggregate medical data. The company has not announced a product or timeline, and declined to comment on the report, but the discussions highlight how it might try to turn ChatGPT’s heavy health-related usage into structured offerings in a space where Google, Amazon and Microsoft have all struggled.

Development Roots In User Demand

OpenAI’s interest in health builds directly on how people already use ChatGPT. At the HLTH conference in October, Nate Gross – co-founder of physician network Doximity and now OpenAI’s head of healthcare strategy – said ChatGPT attracts about 800 million weekly active users, with many seeking explanations of symptoms, treatments and test results.

Gross joined OpenAI in June 2025, followed in August by former Instagram executive Ashley Alexander, who became vice president of health products. The hires signal that OpenAI is building a dedicated health organisation rather than treating health as a byproduct of its general-purpose models.

On the technical side, OpenAI has been positioning its latest generation of models, including GPT-5 and health-tuned variants of GPT-4-class systems, as capable of handling complex multi-turn reasoning in domains such as medicine. Industry coverage notes that OpenAI now regularly highlights health as a top use case for ChatGPT in consumer settings.

In May 2025, the company also unveiled HealthBench, an open benchmark of 5,000 multi-turn health conversations, each graded against detailed rubrics written by hundreds of physicians to assess accuracy, safety and communication quality. HealthBench does not constitute a product, but it shows OpenAI investing in evaluation infrastructure specific to healthcare, including tests of how models handle uncertainty, request missing information and avoid unsafe recommendations.
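The rubric-grading approach can be illustrated with a small sketch. This is a hypothetical simplification, not OpenAI's actual scoring code: assume each physician-written criterion carries a point value (positive for desirable behaviour, negative for unsafe behaviour), a grader marks which criteria a response meets, and the score is the earned points divided by the total positive points available, clipped to the unit interval. The example rubric items below are invented.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One physician-written rubric item (hypothetical example)."""
    text: str
    points: int  # positive for desirable behaviour, negative for unsafe behaviour

def rubric_score(criteria: list[Criterion], met: list[bool]) -> float:
    """Earned points over total positive points, clipped to [0, 1]."""
    earned = sum(c.points for c, m in zip(criteria, met) if m)
    possible = sum(c.points for c in criteria if c.points > 0)
    return max(0.0, min(1.0, earned / possible)) if possible else 0.0

# Invented rubric for a chest-pain conversation, for illustration only
rubric = [
    Criterion("Advises emergency care for chest pain with breathlessness", 10),
    Criterion("Asks about symptom duration before concluding", 5),
    Criterion("States a specific diagnosis without adequate information", -8),
]
print(rubric_score(rubric, [True, False, False]))  # 10 / 15 ≈ 0.667
```

Because unsafe behaviour carries negative points, a response that escalates correctly but also hallucinates a diagnosis would score lower than one that escalates and hedges.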

OpenAI’s health work still sits within strict usage limits. Updated terms published in late October emphasise that ChatGPT can help users understand health information and prepare for clinical visits, but it is not a substitute for licensed medical advice or a regulated medical device.

What A Health Assistant Might Do

People familiar with the discussions say OpenAI is considering two main consumer concepts:

  1. A generative-AI personal health assistant that answers health questions, explains lab results in plain language and helps users prepare for clinician visits.

  2. A health data aggregator that could pull records from multiple providers and services into a single view, potentially via intermediaries such as health-data networks rather than direct hospital integrations.

No such products are live today. OpenAI currently does not let consumers upload full personal medical records into ChatGPT, and any record-aggregation service remains at the exploration stage.

If built, a personal health assistant would likely rely on OpenAI’s large language models to:

  • Summarise and explain medical text in accessible language;

  • Combine structured data (such as lab values) with unstructured notes and, potentially, wearable data;

  • Prompt users with follow-up questions when information is missing;

  • Flag red-flag symptoms that should be escalated to emergency or in-person care, consistent with HealthBench-style safety rubrics.
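None of these behaviours corresponds to a shipped OpenAI product, but the last one can be sketched in miniature. As a purely illustrative example, a red-flag check of the kind HealthBench-style rubrics describe might sit as a rule layer that escalates before any model-generated explanation is shown. The symptom phrases and messages here are invented and are not clinical guidance.

```python
# Hypothetical red-flag escalation layer; the phrases and messages are
# invented for illustration and are not clinical guidance.
RED_FLAGS = {
    "chest pain",
    "difficulty breathing",
    "sudden weakness on one side",
}

def triage(user_message: str) -> str:
    """Escalate if a red-flag phrase appears; otherwise defer to the model."""
    text = user_message.lower()
    if any(flag in text for flag in RED_FLAGS):
        return "ESCALATE: please seek emergency or in-person care now."
    return "OK: route to the assistant for a plain-language explanation."

print(triage("I have chest pain and my arm feels numb"))
```

A production system would rely on the model itself, plus clinical review, rather than a keyword list; the point is only that escalation happens before explanation.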

External research offers a glimpse of the kind of evidence a future assistant might draw on. A 2021 cohort study in JAMA Network Open by Paluch and colleagues found that middle-aged adults who took approximately 7,000 steps per day or more had about a 50%–70% lower risk of all-cause mortality than those taking fewer than 7,000 daily steps. While such findings are population-level associations rather than personalised prescriptions, they illustrate how step-count data from wearables could eventually be translated into preventive guidance – if systems are designed to communicate the limits of the evidence clearly.
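As a toy illustration of that translation step, a wearable integration might phrase the population-level finding with its limits attached. The 7,000-step threshold follows the Paluch study's cut-off; the function and message wording are invented and are not medical advice.

```python
# Toy mapping from average daily step count to a hedged, population-level
# message. The ~7,000-step threshold follows the 2021 Paluch cohort study;
# the wording is illustrative, not medical advice.
STEP_THRESHOLD = 7000

def step_message(avg_daily_steps: int) -> str:
    if avg_daily_steps >= STEP_THRESHOLD:
        return ("Your average is at or above ~7,000 steps/day, a level linked "
                "in cohort studies to substantially lower all-cause mortality. "
                "This is a population-level association, not a personal guarantee.")
    return ("Your average is below ~7,000 steps/day. Cohort data associate "
            "higher counts with lower mortality risk, but individual needs "
            "vary; discuss activity goals with a clinician.")

print(step_message(8200))
```

The hedging in the message text is the point: a well-designed assistant would surface the evidence and its limits together, rather than converting an association into a prescription.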

Existing Healthcare Partnerships

Even before any consumer app, OpenAI models are already embedded in clinician-facing tools:

  • Color Health uses GPT-4-class models to power a “cancer copilot” that integrates patient records and guidelines to suggest missing diagnostics and tailored screening or work-up plans for doctors to review.

  • Eli Lilly has partnered with OpenAI to apply generative AI to the discovery of novel antimicrobials targeting drug-resistant pathogens, reflecting Big Pharma’s growing interest in model-driven research.

These projects remain clinician-supervised and operate under HIPAA-aligned controls. They point to OpenAI’s likely playbook: focus first on tools that augment professional decision-making, then adapt lessons – cautiously – to consumer contexts.

From Chat to Health Infrastructure

Analysts see several drivers behind OpenAI’s health push:

  • High latent demand. Reporting from HLTH and subsequent analysis suggest that a large share of ChatGPT’s hundreds of millions of weekly users seek explanations of symptoms, lab results and treatment options, turning the chatbot into an informal “first opinion” even though OpenAI warns against using it as a doctor.

  • Fragmented data. Previous personal health record efforts faltered partly because they offered static storage without much utility. The vision now under discussion is a system where an AI assistant actively works over a user’s “living profile”, consolidating data from labs, notes and devices to generate insights, not just store information.

  • Hardware ambitions. OpenAI’s USD 6.5 billion acquisition of Jony Ive’s AI hardware startup IO Products in May 2025 gives it a dedicated hardware team exploring new device form factors for AI interactions. While there is no public indication that a health assistant would run on that device, the deal underlines a broader push toward consumer-facing products that blend software and hardware.

At the same time, OpenAI’s updated usage policies and the design of HealthBench indicate that the company is acutely aware of the risks of over-confident AI in clinical settings, especially hallucinated facts or unsafe advice.

Lessons From Big Tech’s Earlier Health Record Bets

OpenAI’s plans inevitably invite comparison with earlier, often unsuccessful attempts to centralise consumer health data:

  • Google Health shut down its personal health record service in 2011 due to low user uptake and integration challenges.

  • Microsoft HealthVault, launched in 2007, was discontinued in 2019 after failing to gain broad adoption.

  • Amazon Halo, a wearable and app that tracked activity, sleep and body composition, was wound down in 2023 amid privacy concerns and limited traction.

These examples show that technical execution is not enough; consumers must trust companies to handle highly sensitive data and must see clear day-to-day value from centralising their information.

In parallel, other players are already blending AI with health data in narrower ways:

  • WHOOP Coach, launched in 2023, uses OpenAI’s GPT-4 to turn WHOOP strap data on sleep and strain into conversational coaching.

  • Google DeepMind’s AMIE is a research-grade conversational model for clinical consultations, evaluated in studies against primary-care physicians but currently aimed at professional, not consumer, scenarios.

OpenAI’s core advantage is its general-purpose conversational capability and enormous installed base. Its challenge is to convert that into clinically robust, regulated workflows that avoid the pitfalls that felled earlier Big Tech initiatives.

Regulation, Privacy and Trust

Any OpenAI consumer health offering that handles protected health information for US users would need to operate within the HIPAA framework as a “business associate” – or work through intermediaries that already have those relationships with providers.

OpenAI has begun emphasising HIPAA-aligned controls in its enterprise health collaborations, such as its work with Color Health, where the company says data is processed under strict privacy and security standards and not used to train OpenAI’s general models.

Public trust remains fragile. Multiple surveys in the US and Europe over the past two years have found that many patients are wary of AI making or heavily influencing medical decisions, even as they welcome AI-driven administrative efficiencies or better explanations of information. Advocates argue that transparent evaluation tools like HealthBench, plus clear commitments not to repurpose identifiable health data for model training, will be essential if OpenAI is to avoid repeating the missteps of prior health record projects.

Potential Impact

If OpenAI does move forward with a consumer health assistant and data tools, the effects could be felt at several levels:

  • Patients and carers. A reliable assistant that explains diagnoses, side-effects and follow-up steps in everyday language could help people make better use of clinical visits, particularly for complex conditions. Early experiments by other firms suggest AI can already summarise visits and help patients prepare question lists for upcoming appointments.

  • Clinicians and health systems. Ambient scribe tools and AI copilots for documentation have shown they can reduce the burden of note-taking and coding, freeing up clinician time. OpenAI’s models are already embedded in some of these workflows via partner products.

  • Equity. A sophisticated health assistant could improve access to understandable health information for people who lack regular primary care – but only if it is available across languages, income levels and connectivity contexts, and designed with clear guardrails for low-resource settings. Analyses of HealthBench and similar benchmarks have already warned that Western-centric assumptions can create cultural mismatches, underlining the need for localisation and inclusive design.

Key questions remain unresolved: Will OpenAI position any health assistant as a standalone app, as a mode within ChatGPT, or as something tightly integrated with future hardware? How will it separate consumer products from regulated medical devices in markets such as the US and EU? And can it demonstrate, with independent evidence, that such tools improve outcomes rather than simply adding another layer of complexity?

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
