ACT Holds "Watching Brief" on AI Deepfake Election Laws for 2028
The ACT Government says it is monitoring how AI-generated deepfakes could affect election integrity in Canberra, but it is not ready to update ACT election laws right now. The issue has come up in evidence to the ACT Legislative Assembly inquiry into the operation of the 2024 ACT election and the Electoral Act 1992, with an eye on risks that could emerge before the 2028 ACT election.
The Assembly committee webpage shows the inquiry began on 4 December 2024, submissions closed on 30 September 2025, and the reporting date is still listed as TBC.
What The ACT Is (And Is Not) Committing To
In public comments reported by Region Canberra, ACT Attorney-General Tara Cheyne indicated the government has not yet settled on a specific model, including whether to ban some AI uses, require disclosure of AI use in campaigning, or introduce stronger rules around consent. The same reporting frames the government’s current position as a watching brief, partly because the technology is moving quickly enough that rules written today could become outdated.
The same report notes that evidence to the inquiry included the view that a blanket ban on generative AI in elections may not be desirable, because AI also has legitimate uses (for example, accessibility or lower-cost communications), while still recognising the risk of deceptive synthetic media.
A National Security Issue?
Synthetic media is not just an election administration problem. It overlaps with information operations and social cohesion, which is why it features in national security threat assessments.
ASIO’s Annual Threat Assessment 2025 explicitly warned that artificial intelligence will enable disinformation and deepfakes that can undermine factual information and erode trust in institutions. Separately, the Australian Electoral Commission’s Election Security Environment Overview (produced in the context of the Electoral Integrity Assurance Taskforce) notes that generative AI can be used to produce false narratives, fake images, and deepfake audio and video, adding pressure on voters’ ability to judge what is real.
That combination helps explain why governments are cautious: rules need to deter deception without creating loopholes or overreaching into legitimate political speech.
South Australia: Targeted Restrictions Plus Labelling And Consent
South Australia has moved further than most Australian jurisdictions on AI in electoral advertising.
The Electoral Commission of South Australia (ECSA) summarises the new restrictions under sections 115B to 115D of the state’s Electoral Act: it is prohibited to publish or distribute electoral advertising generated by AI that falsely depicts a person doing something they did not do. Such material may be published if the person depicted has given written consent and it is clearly labelled as AI-generated. The ECSA also notes penalties of up to AU$10,000 and says the Electoral Commissioner may order removal and require corrective statements.
This is more precise than a blanket deepfake ban. It focuses on deceptive depictions, then uses labelling and consent as the main safety rails.
The Federal Position
At the federal level, the AEC’s position is clear: there is no prohibition on using AI in election campaigning under the Commonwealth Electoral Act 1918. The AEC points instead to authorisation requirements (so voters can see the source of electoral communication) and to a separate offence that applies narrowly to misleading or deceiving an elector in relation to casting a vote.
The AEC also publicly educates voters on deepfakes and AI campaigning, and it links to practical literacy resources (including its Stop and Consider campaign and a disinformation register about election processes).
For the ACT, this matters because it highlights a policy fork:
- write targeted rules like South Australia’s (deception, consent, labelling, remedies), or
- lean more heavily on transparency, education, and enforcement of existing rules.
Why “Just Detect Deepfakes” Is Not A Full Answer
Deepfake detection is a moving target. The AEC notes that advances in AI have made deepfakes easier to create, and that some synthetic media is not intended to deceive (for example parody), while other content can be subtle and designed to mislead.
That is why many governance responses are converging on provenance and disclosure rather than pure detection. The AEC points to industry efforts such as the Tech Accord to Combat Deceptive Use of AI in Elections and the C2PA provenance standard for establishing the source and history of media.
