
The National Security Agency (NSA) is joining the Cybersecurity and Infrastructure Security Agency (CISA), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), and others in releasing the Cybersecurity Information Sheet (CSI), Principles for the Secure Integration of Artificial Intelligence in Operational Technology.
While AI presents the potential to enhance efficiency, productivity, decision-making, and customer experiences, adopting AI into operational technology (OT) systems introduces new risks. Understanding and carefully managing the associated risks are critical in protecting the safety and security of OT systems.
The report describes different ways that AI can be integrated into OT and outlines four principles critical infrastructure owners and operators should follow to both leverage the benefits and minimize the risks of integrating AI into OT environments. The principles detail guidance to understand AI; consider AI use in the OT domain; establish AI governance and assurance frameworks; and embed safety and security practices into AI and AI-enabled OT systems.
Key mitigations highlighted include:
- Ensure proper understanding of the unique risks that AI brings.
- Only integrate AI when there are clear benefits that outweigh the risks.
- Push data from the OT environment to a separate AI system where appropriate.
- Establish clear governance with thorough testing and monitoring.
- Incorporate a human-in-the-loop.
- Implement fail-safe mechanisms to limit the consequences of failures and worst-case scenarios.
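Two of the mitigations above, a human-in-the-loop and fail-safe mechanisms, can be sketched in code. The following is a minimal illustrative sketch (not from the CSI itself; all names are hypothetical) of gating an AI-suggested setpoint change behind hard safety bounds and an operator approval step:

```python
# Hypothetical sketch: an AI-suggested setpoint is only applied if it
# passes fail-safe bounds checks AND a human operator approves it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyEnvelope:
    min_value: float   # lowest safe setpoint
    max_value: float   # highest safe setpoint
    max_step: float    # largest change allowed in a single adjustment

def apply_ai_setpoint(current: float, suggested: float,
                      envelope: SafetyEnvelope,
                      human_approves: Callable[[float, float], bool]) -> float:
    """Return the setpoint to actuate; fall back to `current` on rejection."""
    # Fail-safe: reject anything outside the safe operating envelope.
    if not (envelope.min_value <= suggested <= envelope.max_value):
        return current
    # Fail-safe: reject oversized jumps, limiting worst-case consequences.
    if abs(suggested - current) > envelope.max_step:
        return current
    # Human-in-the-loop: an operator must confirm before actuation.
    if human_approves(current, suggested):
        return suggested
    return current
```

In this sketch the AI never writes to the process directly; even an approved suggestion is clamped by the envelope first, so a compromised or drifting model cannot push the system outside its safe bounds in one step.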
Industry stakeholders weighed in on the release. Their comments follow.
Trey Ford, Chief Strategy and Trust Officer at Bugcrowd
AI is a force multiplier - raising the velocity at which our operators can move, the level of vigilance our teams are capable of, and the level of complexity that can arise in troubleshooting.
The HOW: AI is making us faster, more efficient, and hopefully safer.
The WHAT and WHY, however, are the challenges we need to focus on when asking, ‘Is this implementation fit for purpose?’ Specifically, when we give automated agents autonomy (true agency), we need to operationalize how humans stay in the loop, tune, and troubleshoot these capabilities over time.
I would stress an incremental process in rolling out new capabilities (AI, or otherwise). This helps to fault-isolate, streamline troubleshooting, and operationalize stability. Without this, human operators will quickly become overwhelmed by automated decision-making, and the failure modes will get increasingly complicated.
Pay strict attention to cognitive biases: the agents will have localized focus, and the human operators will unconsciously fixate on the alerts generated by the agentic AI - not on the broader system and the downstream impacts of agentic decision-making.
Agnidipta Sarkar, Chief Evangelist at ColorTokens
This latest guidance is a brilliant effort by CISA, NSA, ACSC, and others to address a much-neglected area of OT cyber defense in the AI age. This was essential. But, while the document focuses heavily on preventing AI compromise through a secure development lifecycle, it offers minimal guidance on containing a compromised AI system.
When an attacker gains access to an AI model making OT decisions, traditional perimeter controls fail immediately. The document assumes organizations will DETECT when AI systems are compromised or manipulated. In a world where the average dwell time in OT environments is 237 days, poisoned AI training data or prompt injections could operate undetected for months.
I believe this can be enhanced by adopting a zero-trust approach and not assuming that enterprises will succeed at "proper AI governance". The reality is that AI adoption remains nascent, fraught with cybersecurity risks, and largely uncontrolled. Given that the gap between innovative, speedy AI adoption and the adoption of foundational zero-trust principles is only widening, the authorities must make zero trust a foundational principle of any AI adoption.
Governance is critical to success; however, without foundational controls to narrow the attack path and reduce the blast radius of systems attacked through AI, AI in OT is a recipe for disaster. Especially because we are still at a stage where we are discovering CVSS 9+ vulnerabilities in OT systems, which are costly to replace and undermine the competitive benefits of industrial organizations. In my view, we must shift from 'a secure AI deployment' to 'a breach-ready AI deployment', because attackers will exploit AI systems regardless of how well we follow these principles.
Marcus Fowler, CEO of Darktrace Federal
These new principles offer timely and practical guidance to safeguard resilience and security as AI becomes central to modern OT environments. It’s encouraging to see a strong focus on behavioral analytics, anomaly detection, and the establishment of safe operating bounds that can identify AI drift, model changes, or emerging security risks before they impact operations.
This shift from static thresholds to behavior-based oversight is essential for defending cyber-physical systems where even small deviations can carry significant risk.
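The contrast between static thresholds and behavior-based oversight can be made concrete. The following is an illustrative sketch (not any vendor's actual method; all names are hypothetical) that flags readings deviating from recently learned behavior via a rolling z-score, alongside a fixed-limit check that would miss the same deviation:

```python
# Hypothetical sketch: behavior-based anomaly detection via a rolling
# z-score, versus a static threshold that only trips at a fixed limit.
from collections import deque
import math

class BehaviorMonitor:
    """Flags readings that deviate sharply from the recent baseline."""
    def __init__(self, window: int = 50, z_limit: float = 3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.z_limit = z_limit

    def is_anomalous(self, reading: float) -> bool:
        flagged = False
        if len(self.history) >= 10:  # need a baseline before judging
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9  # guard against zero variance
            flagged = abs(reading - mean) / std > self.z_limit
        self.history.append(reading)  # keep learning the baseline
        return flagged

def static_check(reading: float, limit: float = 90.0) -> bool:
    """A fixed limit: silent on any deviation that stays below it."""
    return reading > limit
```

A process that normally sits near 50 units and suddenly jumps to 70 is flagged by the behavioral monitor but passes a static 90-unit limit, which is exactly the kind of drift or model-change signal the guidance asks operators to catch early.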
The guidance also encourages caution around LLM-first approaches for making safety decisions in OT environments, citing their unpredictability and limited explainability, which create unacceptable risk when human safety and operational continuity are on the line. It's important to use the right AI for the right job.
Taken together, these principles reflect a maturing understanding that AI in OT must be paired with continuous monitoring, and transparent and distinct identity controls. We welcome this guidance and remain committed to helping operators put these safeguards into practice.
We continue to see growing recognition of AI’s operational value in cybersecurity, as seen in recent NDAA provisions from bipartisan members of the House Armed Services Committee that emphasize AI-driven anomaly detection, securing operational technology, and incorporating AI into cybersecurity training - a proactive step toward strengthening U.S. cyber readiness.
April Lenhard, Principal Product Manager at Qualys
The new joint guidance canonizes the fact that when critical infrastructure is involved and lives are at stake, AI must be incorporated as an extra set of eyes and not as an unsupervised pair of hands.
This shows our global posture with emerging technologies has correctly transitioned from “trust but verify” to “verify, then also verify again in new ways.” The emphasis on secure development, educating personnel on AI risks and limitations, and enumerating data challenges reflects a great launch point for further exploration.
Thomas Wilcox, VP, Security Strategy at Pax8
New SIEM and SOAR technologies are rapidly incorporating AI threat analysis and active response capabilities.
While SIEM and SOAR have been buzzwords for years now, the technology is finally showing real value with the emergent threats associated with large-scale OT compromise and patterns of compromise that humans likely would miss. AI is showing it has a valued place in providing rapid visibility and response.
When these technologies get paired with capable endpoint threat detection, organizations gain actionable views into the origin of most compromises: the human endpoint. Finally, we see increased capabilities emerging to find indications of compromise on the Internet or Dark Web. Again, these leverage AI to actively search for signs that a company may have been breached, as a last line of defense to minimize the impact.
The reality is that the industry is generally lagging behind the AI-enabled attack capabilities of APTs. We need to move more quickly to leverage AI and meet the challenge.