Australia Challenges X Over Grok AI Safety and Deepfake Concerns
Australia’s eSafety Commissioner has raised concerns about misuse of Grok, the generative AI system available on X, after what it described as a recent increase in reports involving sexualised or exploitative AI imagery. The regulator says the trend, while still small in total volume, has escalated quickly and poses potential risks to children.
What eSafety Says Has Changed in Australia
In a 9 January 2026 media release, eSafety said reports linked to Grok being used to generate sexualised or exploitative imagery had risen from almost none to several over the past couple of weeks. eSafety said it has written to X seeking information about the safeguards intended to prevent misuse and to meet Australia’s online safety obligations.
eSafety also noted it can use enforcement powers, including removal notices, where content meets legal thresholds under the Online Safety Act. It also pointed to wider systemic obligations that apply to X, Grok, and other services through Australia’s online safety codes and standards framework.
What Australian Reporting Has Described
ABC News and the ABC triple j Hack program reported accounts from Australians who say they were targeted with non-consensual AI image manipulation, including “undressing”-style edits made with Grok. ABC also reported it had observed Grok-generated undressed images of Australian politicians, and it highlighted a victim’s call for an opt-out mechanism so people can reduce the risk of their images being used for generative manipulation.
The same ABC reporting referenced third-party attempts to estimate the scale of the problem, including assessments by Copyleaks and analysis by AI Forensics, while clearly treating those figures as external estimates rather than verified platform reporting.
What xAI Says It Has Changed
In response to growing scrutiny, Reuters reported that xAI imposed restrictions on Grok image editing for all users, limiting edits that depict real people in revealing clothing, and described the approach as location-based to align with local laws.
Australian reporting also described X limiting Grok image generation and editing to paying subscribers as the backlash intensified.
However, the practical effectiveness of these changes is still being tested. Recent reporting has raised questions about whether the restrictions apply consistently across different Grok entry points, including stand-alone tools, not just the Grok experience inside X.
Just a Content Policy?
This issue goes beyond offensive posts. It centres on how easily ordinary photos can be converted into sexual content without consent, then redistributed at speed. That creates privacy harms for targets, increases the risk of harassment and coercion, and expands the attack surface for impersonation and extortion-style abuse.
For platforms and AI developers, it also shifts the problem from moderating uploaded content to preventing high-risk transformations at creation time, including stronger friction, better detection of intimate image abuse patterns, and clearer consent controls.
The March 2026 Milestone
eSafety said additional mandatory codes commence on 9 March 2026, adding new obligations for AI services and other service types to limit children’s access to sexually explicit material, violence, and themes related to self-harm and suicide.
Separately, eSafety’s codes overview explains that Age Restricted Material Codes are being phased in, with three codes in effect from 27 December 2025 and the remaining six taking effect on various dates starting from 9 March 2026 across service categories including social media services and app distribution platforms.
eSafety also pointed to the Basic Online Safety Expectations framework, which sets broader expectations that covered services take reasonable steps to minimise children’s exposure to unlawful or harmful material.
International Pressure Is Building
Outside Australia, regulators and governments have also moved quickly.
In the UK, Ofcom opened a formal investigation into X under the Online Safety Act, focused on risks from Grok-generated sexualised imagery and on whether X met duties tied to illegal content. Reuters later reported that Ofcom said the investigation would continue even after the reported product changes.
In the EU, Reuters reported the European Commission extended an order requiring X to retain Grok related internal documents and data until the end of 2026, linked to ongoing assessment work under the Digital Services Act.
In the Asia Pacific region, ABC News reported that Indonesia and Malaysia temporarily blocked access to Grok after authorities said it was being misused to generate explicit and non-consensual images. Reuters also reported that Japan launched a probe and urged X to take immediate corrective action.
How Other AI Providers Frame Consent and Likeness Controls
Across the industry, major AI providers generally prohibit sexual content involving minors and position abuse prevention as a core safety requirement. Some providers have also started emphasising consent-based likeness features for generative video, aimed at reducing impersonation and unwanted use of a person’s image.
In Australia, the key question is whether platform-level restrictions measurably reduce harm, or whether misuse simply shifts to other tools and workarounds. The next regulatory test is how services demonstrate real-world effectiveness as further Australian code obligations come into effect from March 2026, and how regulators assess compliance beyond policy statements.