US & Australia Lead New 25-Page Guide to Secure AI in Critical Infrastructure

AI-generated Image for Illustration Only (Credit: Jacky Lee)

Cybersecurity agencies from the United States and Australia have led a multinational effort to help owners and operators of critical infrastructure integrate artificial intelligence into operational technology environments more safely, amid concerns that AI can expand the cyber risk landscape for essential services.

The 25-page guidance, titled “Principles for the Secure Integration of Artificial Intelligence in Operational Technology”, was published on 3 December 2025 as a Cybersecurity Information Sheet. It was co-authored by the US Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), in collaboration with the US National Security Agency’s Artificial Intelligence Security Center (NSA AISC), the FBI, and national cyber agencies from Canada, Germany, the Netherlands, New Zealand and the United Kingdom.

The document notes that as AI adoption accelerates across industry following the public release of ChatGPT in 2022, critical infrastructure providers face a dual reality: AI can improve efficiency, decision support and maintenance planning, but it can also introduce new safety, reliability and cybersecurity risks when embedded in systems that manage essential public services.

A Step-by-Step Blueprint for AI in Industrial Systems

At the centre of the guidance are four sequential principles intended to help organisations evaluate, deploy and oversee AI in OT environments without compromising safety or operational continuity. The authors emphasise that these steps apply across AI technologies, including machine-learning models, large language models and AI-enabled software that may automate or recommend actions in industrial settings.

First, “Understand AI”. The guidance calls for organisations to develop a shared baseline of AI literacy among OT, IT and security teams, with particular attention to risks unique to AI use in operational settings, such as safety-process bypasses, model drift, and the potential for attackers to manipulate training or input data. The goal is to ensure staff can evaluate AI performance limits and recognise failure modes before systems are deployed in live environments.
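To make the failure-modes point concrete, here is a minimal, hypothetical sketch, not taken from the guidance, of one way an OT team might watch for input drift before trusting a deployed model’s output. The statistical test, window size and threshold are all illustrative assumptions:

```python
# Minimal drift-monitoring sketch (illustrative only; not from the guidance).
# Compares live sensor readings against a training-time baseline with a
# two-sample Kolmogorov-Smirnov test. Threshold and window are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(baseline: np.ndarray, recent: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Flag drift when recent inputs no longer match the baseline."""
    _, p_value = ks_2samp(baseline, recent)
    return p_value < p_threshold

rng = np.random.default_rng(0)
baseline = rng.normal(loc=50.0, scale=2.0, size=5000)  # commissioning data
recent = rng.normal(loc=53.5, scale=2.0, size=360)     # shifted live window
if drift_alarm(baseline, recent):
    print("Input drift detected: route model decisions to human review.")
```

A check like this only addresses statistical drift; deliberate manipulation of training or input data, which the guidance also flags, calls for separate integrity controls.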

Second, “Consider AI Use in the OT Domain”. The document urges operators to validate that an AI solution is necessary for the specific operational problem at hand, then assess the security implications of data collection, storage and model training. It highlights the complexity of OT datasets and the value of those datasets to adversaries, encouraging careful attention to where data is processed and how AI systems might create new exposure points.

Third, “Establish AI Governance and Assurance Frameworks”. The guidance recommends that AI be brought under existing organisational governance structures, with expanded testing and evaluation practices that account for AI-specific risks. Continuous validation of models, clearer accountability for AI-enabled decisions, and integration of AI into cybersecurity assurance processes are emphasised as key requirements for safe adoption in high-stakes environments.
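As a sketch of what continuous validation and clearer accountability might look like in practice, the following hypothetical promotion gate blocks a model update unless it passes performance checks and carries a named approver. Every field name and threshold here is an illustrative assumption, not a requirement from the document:

```python
# Hypothetical continuous-validation gate (illustrative only).
# A candidate model is promoted only if it passes AI-specific checks
# and an accountable engineer has signed off.
from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float          # performance on a held-out OT test set
    worst_case_error: float  # largest single prediction error observed
    approver: str            # named engineer accountable for sign-off

def promotion_gate(report: ValidationReport) -> bool:
    """Return True only if every assurance criterion is met."""
    return all([
        report.accuracy >= 0.95,         # baseline performance floor
        report.worst_case_error <= 5.0,  # bounded error for safety review
        bool(report.approver),           # accountability is recorded
    ])

report = ValidationReport(accuracy=0.97, worst_case_error=3.2,
                          approver="ot-lead@example.org")
print("Deploy" if promotion_gate(report) else "Block and escalate")
```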

Fourth, “Embed Safety and Security Practices into AI and AI-Enabled OT Systems”. The authors stress the importance of oversight and failsafe mechanisms that preserve human control and allow organisations to revert to conventional operations if AI behaviour becomes unreliable or security incidents are suspected. The guidance also encourages explicit planning for AI-related incident response and recovery scenarios, reflecting the potential for AI to introduce novel threat pathways in industrial systems.
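One simple pattern consistent with this principle, sketched below under assumed names and thresholds (the guidance prescribes no specific implementation), is to treat AI output as advisory and revert to a conventional setpoint whenever confidence is low or an incident is suspected:

```python
# Illustrative failsafe wrapper (not from the guidance). The AI value is
# advisory; control falls back to the conventional, human-approved
# setpoint on low confidence or a suspected incident.
def select_setpoint(ai_value: float, ai_confidence: float,
                    conventional_value: float, incident_suspected: bool,
                    min_confidence: float = 0.9) -> float:
    """Use the AI recommendation only when it is currently trusted."""
    if incident_suspected or ai_confidence < min_confidence:
        return conventional_value  # revert to conventional operations
    return ai_value

# A suspected security incident forces the fallback path, regardless
# of how confident the model is.
print(select_setpoint(ai_value=42.7, ai_confidence=0.95,
                      conventional_value=40.0, incident_suspected=True))
```

The appeal of this pattern is that reverting requires no retraining or redeployment: the conventional control path stays live and tested at all times.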

The document positions these principles as a way to improve resilience without requiring a wholesale replacement of legacy OT environments, many of which were not designed for modern AI lifecycles.

Where AI Fits in Real-World OT Architectures

A distinguishing feature of the guidance is its use of the Purdue Model to contextualise AI adoption across industrial layers. The document notes that predictive and other machine-learning methods are typically most relevant within operational layers (Levels 0–3), while large language models are more commonly placed in business-context layers (Levels 4–5), potentially with data exported from the OT network. This mapping is meant to help organisations identify where AI creates the greatest value, and where it may pose the highest risk, across field devices, supervisory control and enterprise systems.
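Read as policy, that mapping could be enforced with a simple architecture check. The sketch below is a hypothetical illustration of the idea, with categories and level assignments assumed for the example rather than specified by the guidance:

```python
# Hypothetical placement-policy check reflecting the Purdue mapping
# described above. Categories and allowed levels are illustrative.
ALLOWED_LEVELS = {
    "predictive_ml": {0, 1, 2, 3},  # e.g. anomaly detection on sensor data
    "llm_assistant": {4, 5},        # e.g. reporting on data exported from OT
}

def placement_allowed(ai_kind: str, purdue_level: int) -> bool:
    """Check a proposed deployment against the architecture policy."""
    return purdue_level in ALLOWED_LEVELS.get(ai_kind, set())

print(placement_allowed("llm_assistant", 2))  # False: too deep in the OT stack
print(placement_allowed("predictive_ml", 2))  # True: supervisory layer
```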

By connecting AI use cases to a familiar OT architecture model, the authors aim to reduce confusion among operators who must balance innovation against risk in complex, interdependent environments.

Why the Guidance Arrives Amid Growing AI-OT Convergence

The release reflects a broader international push to shape AI deployment in critical infrastructure through practical, security-first frameworks rather than standalone mandates. The guidance frames AI as a powerful but potentially fragile layer in cyber-physical systems, warning that poor implementation can create safety risks alongside cybersecurity vulnerabilities.

Public commentary around the release has similarly emphasised that OT disruption carries real-world consequences and that AI integration should be matched with rigorous safeguards.

Who Shapes the Global Response?

The authorship list underscores the multinational nature of the effort. Alongside CISA and ASD’s ACSC, contributing agencies include the NSA AISC, the FBI, the Canadian Centre for Cyber Security, Germany’s Federal Office for Information Security (BSI), the Netherlands’ NCSC, New Zealand’s NCSC and the UK’s NCSC. The guidance is marked TLP:CLEAR, meaning it may be shared and used without restriction across both government and industry.

The approach aligns with a broader trend of “secure-by-design” expectations for AI in high-consequence environments, and places responsibility on owners, operators and vendors to manage AI risks through shared standards and disciplined deployment practices.

How This Guide Fits the Evolving Patchwork of AI Rules

The guidance arrives as jurisdictions continue to develop or refine AI governance models.

In Europe, the EU AI Act entered into force on 1 August 2024, with a staggered application timeline: prohibitions and AI literacy obligations apply from 2 February 2025, general-purpose AI obligations from 2 August 2025, and many high-risk system rules from 2 August 2026, with some high-risk AI embedded in regulated products subject to an extended transition to 2 August 2027. Penalties for serious violations can reach €35 million or 7% of global annual turnover, depending on the infringement category.

Australia has also signalled a stronger national posture on AI opportunity and safety. The federal government published its National AI Plan on 2 December 2025, framing goals to build capability, broaden adoption and “keep Australians safe”, alongside plans for an AI Safety Institute to become operational in early 2026.

Against this shifting regulatory and policy backdrop, the CISA-ACSC guidance stands out for its practical OT focus and its effort to integrate AI risk thinking into established industrial security frameworks rather than treating AI as a wholly separate governance domain.

A Practical Baseline for Safer AI in Essential Services

The guidance is likely to serve as a reference point for organisations trialling AI within energy, water, transport and other operationally sensitive sectors. Its emphasis on education, business-case evaluation, governance integration and safety-by-design suggests that future OT-AI policy may increasingly move toward “layered assurance” models that combine technical testing, operational oversight and clear organisational accountability.

For critical infrastructure owners and operators, the message is consistent: AI can support resilience and performance, but only if adoption is deliberate, well-governed and designed to fail safely. The multinational authorship of the document indicates a shared interest in preventing AI from becoming a destabilising variable in the cyber-physical systems that underpin modern societies.
