Australia Issues First Sanction Over AI-Generated False Legal Citations

Image Credit: Conny Schneider | Unsplash

An Australian solicitor has become the first in the nation to face professional sanctions for submitting artificial intelligence-generated false legal citations, underscoring the ethical challenges emerging as AI tools proliferate in legal practice despite documented risks of inaccuracy.

The Victorian Legal Services Board and Commissioner varied the practising certificate of solicitor Mr Dayal following his submission of unverifiable case references in a family law proceeding, according to regulatory announcements and judicial records. The case highlights the need for rigorous verification of AI outputs, as courts internationally confront similar issues with the technology's tendency to produce fabricated information.

Case Details

In a 2024 hearing before the Federal Circuit and Family Court of Australia in Victoria, Mr Dayal, acting for a party in a family law matter, provided a list of authorities at the judge's request. The references, produced via an AI tool integrated into legal practice software, were found to be nonexistent after court scrutiny.

Mr Dayal acknowledged he lacked full understanding of the tool and did not independently verify the citations' accuracy. The judge referred the conduct to the Victorian Legal Services Board and Commissioner, which concluded its investigation by varying his certificate on Aug. 19, 2025.

The variation prohibits Mr Dayal from practising as a principal solicitor, handling trust money or operating his own firm, restricting him to employment as a supervised solicitor. He also paid the costs incurred by the opposing party's legal team as a result of the procedural disruption.

Background of AI in Legal Practice

Generative AI tools have gained traction in legal work since the release of models like ChatGPT in late 2022, aiding tasks such as research and drafting but remaining prone to "hallucinations", in which they generate convincing yet false content. This vulnerability poses amplified risks in legal settings that demand precision.

Precedents include a 2023 U.S. federal court case in New York where two lawyers and their firm were fined US$5,000 for submitting a brief with six fictitious citations from ChatGPT. Incidents have also occurred in jurisdictions like the United Kingdom and Canada, leading to judicial advisories on mandatory human oversight.

In Australia, more than 20 reported instances of AI-related errors have emerged in courts since 2023, involving lawyers or self-represented litigants submitting documents with fabricated references. Recent examples include a Western Australian solicitor ordered to pay A$8,371.30 in costs in August 2025 after citing four nonexistent cases in an immigration proceeding, with the matter referred to the state's legal regulator. Separately, a senior Victorian barrister apologised that month for submissions in a murder trial containing nonexistent case citations and inaccurate quotes from a parliamentary speech, causing a one-day delay.

Reasons and Ethical Implications

Practitioners adopt AI for efficiency in areas such as family and migration law, but unverified reliance on it contravenes obligations to provide accurate information and maintain candour toward the court. In Mr Dayal's case, the oversight arose from assuming the tool's reliability, resulting in the inefficient use of judicial time.

Such errors undermine confidence in the justice system and risk unjust outcomes if overlooked. Australian bodies, including the Law Council, emphasise that AI does not exempt lawyers from ethical duties, recommending education on its limitations.

On a global scale, the case aligns with efforts to establish standards, such as the American Bar Association's Formal Opinion 512, which provides guidance on how existing ethical rules apply to the use of generative AI. Judicial commentary has labelled unchecked AI use as potentially misleading, advocating comprehensive checks.

Impact on Legal Systems

The sanction establishes a national benchmark, indicating regulators' commitment to accountability amid rising AI integration. It could encourage cautious adoption, with firms enhancing protocols for training and validation.

Internationally, it informs ongoing discussions, including in the European Union where AI regulations classify systems used in the administration of justice and democratic processes as high-risk, imposing transparency and oversight requirements. Experts highlight broader concerns for digital integrity, where unverified AI might compromise evidentiary reliability.

Future Trends

AI application in law is expected to grow for functions such as electronic discovery and outcome prediction, with surveys indicating that 79% of legal professionals now use the technology, up from 19% in 2023. Regulatory frameworks are anticipated to evolve, incorporating requirements for AI usage disclosure and risk assessments.

Developments may favour combined human-AI approaches to reduce errors, supported by ongoing professional training initiatives. Outright prohibitions appear unlikely, as they would stifle innovation; instead, the emphasis will remain on ethical implementation to uphold judicial integrity.
