Australia’s First National AI Plan: AUD 29.9M for New Safety Institute
AI-generated Image (Credit: Jacky Lee)
Australia has released its first National AI Plan, setting out a government roadmap to expand artificial intelligence across the economy while relying primarily on existing “technology-neutral” laws and regulators rather than a single, economy-wide AI Act. The plan was launched on 2 December 2025 by the Department of Industry, Science and Resources and framed around three goals: capturing opportunities, spreading benefits, and keeping Australians safe.
At the centre of the new approach is the creation of an Australian AI Safety Institute (AISI), backed by AUD 29.9 million in funding, with operations expected to begin in 2026. The institute is intended to monitor, test and share information on emerging AI capabilities, risks and harms, and to provide technical advice to ministers and existing regulators rather than replace them.
The plan marks a clear policy shift away from earlier proposals for mandatory national guardrails on “high-risk” AI systems. Instead, the government is signalling a more flexible model that expects sector regulators, supported by the new institute, to apply and sharpen the tools already available under laws such as privacy, consumer, and anti-discrimination frameworks, alongside ongoing reforms to modernise those regimes.
Economic Promise, With a Cautious Regulatory Bet
The plan links Australia’s AI ambitions to national productivity and competitiveness. It aligns with recent public commentary suggesting that large economy-wide gains from AI are possible, while cautioning that poorly calibrated regulation could dampen investment.
The plan does not present a single definitive “AI dividend” figure as a government forecast; instead, its economic framing sits alongside external modelling and recent policy debate that has emphasised both the upside and the need to avoid premature economy-wide rules.
Data Centres, Compute and “Sovereign” Capabilities
A major theme is infrastructure and capability building, especially compute, data access and a stronger local ecosystem. The plan highlights Australia’s push to attract and scale data centre investment and advanced digital infrastructure. Recent announcements and projects referenced in official material include major investment commitments from global cloud and data centre firms.
This infrastructure agenda is also tied to risk management. Commentary around the plan has emphasised reducing over-reliance on overseas models and platforms for sensitive government use, a theme echoed in public reporting on the “sovereign” AI directions tied to the plan’s aims.
Scaling Adoption, Especially for SMEs and Regions
On adoption, the plan builds on work led by the National AI Centre and flags targeted support aimed at smaller organisations that may otherwise fall behind. The government has invested AUD 17 million in the AI Adopt Program for SME support, with steps to bring these efforts under the National AI Centre’s remit.
The plan also emphasises digital and AI inclusion, noting the gap between metropolitan and regional adoption and broader digital exclusion challenges affecting First Nations communities and other cohorts.
Skills and the Workplace
The strategy places significant weight on workforce transition and consultation. It stresses that AI deployment in workplaces should involve meaningful engagement with workers and unions, and that privacy and psychosocial risks need attention as AI and algorithmic management tools spread.
This focus aligns with the early response from organised labour, which has welcomed the plan’s explicit framing of workers’ rights and conditions as part of Australia’s AI transition.
Practical Tools Over Symbolic Declarations
Rather than producing a separate, sweeping “public service AI plan”, the National AI Plan sits alongside concrete government initiatives and updated frameworks for responsible AI use in agencies. It references ongoing development of shared platforms and tools intended to help public servants use AI safely and productively.
A New Institute, Old Laws, and Targeted Reform
The plan’s safety architecture is designed to complement existing regulators. The AISI is expected to generate technical insights on both upstream model risks and downstream harms, sharing information to support compliance and enforcement across sectors.
It also flags areas where law reform may still be needed, such as privacy modernisation, automated decision-making safeguards, and copyright and IP questions raised by AI training and content generation, without committing to a single new overarching AI statute at this stage.
Broad Support for Direction, Unease About Depth
Early responses have been mixed but generally recognise the plan as a significant first national consolidation of AI policy.
Industry groups and business-focused commentary have welcomed the emphasis on adoption, infrastructure and regulatory restraint, while continuing to press for clearer settings around innovation funding and copyright.
Unions have endorsed the plan’s worker-centred language and expectations of consultation in workplace AI rollouts.
Greens figures and some civil society voices have criticised the retreat from mandatory national guardrails, arguing the approach may lack sufficient enforcement bite.
Academic and expert commentary, including reactions associated with UNSW’s AI leadership, has broadly supported the creation of the AISI but questioned whether current funding and tools will be enough compared with larger overseas initiatives.
International Context
Australia’s approach now sits between more prescriptive and more permissive models abroad. The EU AI Act has entered into force with a risk-based structure and phased obligations, including specific requirements for general-purpose AI over time.
In Asia, countries are pursuing different blends of industrial policy and safety governance. Japan has advanced a national AI-oriented policy and legislative direction focused on promotion and governance principles, while South Korea has moved toward a formal framework that also reflects risk-based thinking.
China has continued to frame global AI governance priorities through international statements and action-plan-style initiatives, emphasising broad principles and state-led coordination.
