Grok Misinformation Around The Bondi Beach Attack Shows How Fast AI Can Mislead During Breaking News

Image Credit: Salvador Rios | Splash

What Happened

NSW Police say two men opened fire into crowds at Bondi Beach on Sunday 14 December 2025. In their first public update on the night, police warned casualty numbers were expected to change and confirmed ten deaths at that point.

In a later NSW Police update, police said 14 people died at the scene and a further two people later died in hospital, bringing the total deaths to 16. Police also said 42 people, including four children, were taken to hospitals across Sydney, and that one alleged shooter died at the scene while the other was taken to hospital in a critical condition.

Reuters later reported police allegations that the attackers killed 15 people at a Hanukkah celebration, with the older alleged gunman shot dead at the scene and the younger in critical condition.

What Grok Was Reported to Have Got Wrong

In the hours after the attack, multiple outlets documented cases where Grok, xAI’s chatbot (available on X), produced incorrect answers when users asked it to identify people and verify footage connected to the incident.

Key examples reported include:

  • Misidentifying the bystander who disarmed a gunman: TechCrunch and The Verge reported Grok repeated a claim that a person named “Edward Crabtree” disarmed one of the gunmen. Reuters Fact Check reported this was false, and traced the “Edward Crabtree” claim to an article on a site using the domain thedailyaus.world (not the established Australian outlet with a similar name). Reuters said the domain was registered on 14 December 2025, and that the site could not be reached for comment.

  • Mislabeling images of the bystander as an unrelated hostage photo: Gizmodo and TechCrunch reported Grok identified an image of the injured bystander as an Israeli hostage.

  • Casting doubt on authentic footage: Gizmodo reported Grok told users a widely shared video of the disarming incident was an old unrelated viral clip and said the Bondi footage was “uncertain”.

  • Getting the location and context wrong: The Verge reported Grok incorrectly described Bondi Beach footage as being from elsewhere, and also repeated other mistaken context about the event.

Taken together, the pattern was not a single typo. It was an AI system delivering confident-sounding claims that did not match police statements, verified reporting, or subsequent fact checks.

How Grok’s Positioning Adds to the Risk

xAI and X market Grok as a general-purpose assistant for tasks like answering questions, and xAI promotes it as having strong real-time search capability.

That positioning matters because during breaking news, people often treat chatbots as a shortcut to verification. If the tool answers quickly but loosely, it can scale confusion faster than rumours spread on their own, especially inside a high-velocity social feed.

How Other AI Products Try to Reduce This Problem

Across the industry, one common response has been adding web connected answers with source links so readers can verify. For example:

  • OpenAI has described ChatGPT search as returning answers with links to sources.

  • Google’s Gemini API documentation describes “grounding” and returning “grounding citations” when using Google Search grounding.

  • Perplexity positions its Search API around ranked web results from a refreshed index.

This does not magically eliminate mistakes, but it shifts the user experience toward “show your working”, which is critical when accuracy matters.
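The shared pattern above, an answer that travels with the sources backing it, can be sketched in a few lines. The response structure below is a simplified, hypothetical stand-in loosely modeled on how grounded-search APIs return citations; the field names are assumptions, not any vendor's actual schema.

```python
# Sketch: render a "grounded" answer so the sources travel with the claim.
# The response dict is a hypothetical stand-in for what a web-connected
# chatbot API might return; field names are assumptions, not a real schema.

def render_grounded_answer(response: dict) -> str:
    """Format an answer with numbered source links appended."""
    lines = [response["answer"]]
    sources = response.get("sources", [])
    if not sources:
        # No citations: flag it so the reader knows to verify elsewhere.
        lines.append("(No sources provided - treat this answer as unverified.)")
    else:
        lines.append("Sources:")
        for i, src in enumerate(sources, start=1):
            lines.append(f"  [{i}] {src['title']} - {src['url']}")
    return "\n".join(lines)

# Example with a mocked response (the URL is illustrative only):
mock = {
    "answer": "Police have confirmed 16 deaths; earlier figures were provisional.",
    "sources": [
        {"title": "NSW Police update", "url": "https://example.org/nswpf-update"},
    ],
}
print(render_grounded_answer(mock))
```

The point is the user-experience shape: every claim arrives with links the reader can open, and an answer with no sources is explicitly flagged rather than delivered with the same confidence.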

Practical Takeaways for Readers

If you are using AI during fast-moving news:

  1. Treat chatbots as a starting point, not a source.

  2. Check primary authorities first (police, emergency services, election commissions). NSW Police updates in this case show why early numbers can change.

  3. Look for verification language ("confirmed by police", "verified video", named outlets).

  4. Be extra cautious with identity claims about private individuals, because that is where harm escalates fastest.

  5. Prefer answers that cite sources you can open and read, not just confident summaries.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
