Grok ‘Undressing’ Images: Australia’s Legal Response to Deepfakes
In January 2026, reports that X’s Grok image tools were being used to generate non-consensual sexualised images pushed an uncomfortable question back into the spotlight: when generative AI can “undress” a real person from a single photo, what can Australian law actually do quickly, and where does it still struggle?
A recent explainer in the Law Society Journal used the Grok controversy as a case study to map the current legal toolkit, focusing on eSafety’s takedown powers under the Online Safety Act 2021 and the expanding set of criminal offences that now target some forms of sexually explicit deepfakes.
What Triggered the Legal Focus
ABC reporting in early January described how users were prompting Grok to generate sexualised and nude-like images of women, and in some cases attempting prompts involving children. ABC also reported that restrictions had been announced for Grok, including limits on editing images of real people into revealing clothing, and moves to make some image features available only to paid subscribers.
From a governance angle, this matters because it is not just “bad content”. It is a stress test for platform safeguards, complaint pathways, and cross-border enforcement when the alleged misconduct can be generated and reposted at scale.
eSafety Takedowns Under the Online Safety Act
Australia’s most immediate regulatory mechanism is civil, not criminal: eSafety can require removal of image-based abuse content under the Online Safety Act 2021 scheme.
eSafety’s regulatory guidance explains that a “removal notice” can require content to be taken down within 24 hours. If a notice is not complied with, civil penalties can apply, with the maximum for an individual expressed as 500 penalty units (and higher for bodies corporate).
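To put that in dollar terms: assuming the current Commonwealth penalty unit value of $330 (the rate set in November 2024), 500 penalty units works out to a maximum of $165,000 for an individual, with the corporate maximum running to five times that figure under the standard civil penalty provisions.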
The practical benefit of this model is speed: it is designed to get harmful material down first, without waiting for a criminal brief to be prepared.
What eSafety Is Signalling for Platforms: “Safety by Design” Obligations Are Tightening
Separately from individual complaints, eSafety has been positioning the Grok issue as part of a broader compliance story for services that enable the creation and sharing of AI-generated content.
In a January 2026 media release, eSafety said it had written to X seeking information about safeguards, and pointed to incoming mandatory industry codes and standards aimed at preventing and addressing child sexual exploitation and abuse material, with a stated commencement date of 9 March 2026.
The direction of travel is clear: regulators are looking beyond single posts and towards whether platforms are engineering out predictable misuse in the first place.
Criminal Law Is Catching Up
Commonwealth: new offences explicitly cover tech-altered sexual material
At the federal level, the Commonwealth moved in 2024 to modernise Criminal Code offences around non-consensual sexual material shared using a carriage service. Parliament’s Bills Digest notes the new offence is intended to apply whether material is unaltered or “created or altered” using technology, which is the legal hook for deepfakes.
The Attorney-General’s Department fact sheet for police also frames the 2024 reforms as strengthening offences for the “creation and non-consensual sharing” of sexually explicit material online, including deepfakes.
One important limitation remains: these Commonwealth offences largely focus on transmission over a carriage service. That means “creation” only becomes chargeable in many scenarios once it is linked to sharing or distribution, a gap that law reform bodies have debated since the Bill stage.
New South Wales: “production”-style offences are expanding, including audio deepfakes
NSW is moving further into “creation” territory.
The NSW Government announced reforms in 2025 to strengthen offences around intimate images, including digitally created or altered material.
The Bill’s explanatory materials describe amendments that specifically address altered or fabricated intimate content and also extend to certain sexually explicit “audio material”, reflecting the rise of voice cloning and synthetic audio harms.
An NSW commencement proclamation sets a start date of 16 February 2026 for the relevant Act.
In practical terms, that is a major legislative signal: states are starting to treat deepfake intimate content less like “mere online offensiveness” and more like a distinct form of sexual abuse that warrants direct criminalisation.
The Hardest Part Is Still Jurisdiction and Attribution
Even with stronger laws, the LSJ explainer highlights the same enforcement pain points that keep recurring in deepfake cases: identifying the user behind an account, gathering admissible evidence, and dealing with cross-border actors and infrastructure.
This is also where civil takedown schemes and criminal offences intersect awkwardly. Takedowns can reduce ongoing harm quickly, but they do not always identify perpetrators. Criminal enforcement can punish offenders, but it often moves more slowly and faces bigger jurisdictional hurdles when content or suspects sit offshore.
International Pressure Is Rising Too
ABC reported that overseas governments and regulators publicly criticised the Grok “undressing” use case, and that some countries moved to block or restrict access.
For Australia, the more immediate story is domestic: the combination of eSafety’s takedown framework, Commonwealth carriage service offences, and NSW’s upcoming deepfake-focused amendments is turning “AI safety” from a broad principle into enforceable obligations and offences, with clearer expectations for platforms and clearer pathways for victims.