CFOtech New Zealand - Technology news for CFOs & financial decision-makers

AI risks intensify cyber threats to critical infrastructure OT

Fri, 5th Dec 2025

Cyber security concerns are intensifying as artificial intelligence is increasingly introduced to operational technology environments. The deployment of AI across critical infrastructure is creating new systemic risks, according to leaders in the field.

OT security risk

Many organisations in industrial sectors are integrating AI to drive efficiency through predictive maintenance, anomaly detection, and optimisation tools. However, Rob Demain, Chief Executive Officer at e2e-assure, believes that security protocols are lagging behind the pace of adoption. He warned that AI could introduce model drift and misgeneralisation into operational technology (OT) environments, potentially leading to unsafe decisions and safety-process bypasses if AI recommendations override established manual checks.

Connectivity associated with AI, such as the use of application programming interfaces and cloud services, is increasing the number of ingress points into OT networks, adding complexity to the security landscape for critical infrastructure operators.

AI-powered threats

While current adoption of AI within OT remains limited, some organisations are beginning to pilot large language model (LLM)-based assistants to support engineering and operational tasks. Demain sees clear indications that adversaries are already using AI to develop advanced tactics. He said the use of AI in cyber attacks is not only theoretical, as attackers are employing it for productivity enhancements and dynamic command generation, making detection more difficult.

There is evidence that AI is enabling the development of polymorphic malware. AI-powered communication channels, serving as command-and-control links, can blend into legitimate traffic, allowing malicious activity to evade traditional OT security measures such as signature-based detection and static indicator of compromise (IOC) matching.
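Why static IOC matching struggles here can be shown with a minimal sketch (the hashes and payload bytes below are purely illustrative): an exact-hash check catches a known sample, but even a trivially mutated variant of the same code no longer matches any stored indicator.

```python
import hashlib

# Hypothetical set of known-bad file hashes (static IOCs)
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def ioc_match(sample: bytes) -> bool:
    """Signature-style check: flag only exact hash matches."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

original = b"malicious_payload_v1"
# A polymorphic engine alters the bytes on every build, so the
# hash no longer matches any stored indicator of compromise.
mutated = original + b"\x90"  # a single appended byte is enough

print(ioc_match(original))  # True  - the known sample is caught
print(ioc_match(mutated))   # False - the mutated variant slips through
```

Real polymorphic malware re-encrypts or rewrites itself far more aggressively than this one-byte change, which is why defenders increasingly rely on behavioural rather than signature-based detection.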

Defensive challenges

Demain said defenders now need to view both external LLM API traffic and internal model operations with the highest level of scrutiny. He highlighted concerns around local LLMs, pointing out that these models often contain sensitive data that could benefit attackers. The models themselves could serve as a roadmap for cybercriminals seeking to escalate their attacks.

The concept of "Living off the land" - the use of legitimate tools and functions to conduct attacks - is evolving into what some researchers call "Living off the LLM", as attackers leverage AI-native capabilities for covert actions inside OT environments.

Guidance limitations

The United States Cybersecurity and Infrastructure Security Agency (CISA) has issued new guidance recommending that AI systems be kept separate from OT networks: AI should receive only read-only data feeds, data should flow one way from OT to IT, and AI should have neither visibility into nor control over OT systems.
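The pattern CISA describes can be sketched in a few lines. This is an illustration only, with made-up tag names and values: OT telemetry is serialised and exported outbound, and there is deliberately no companion function that accepts commands or model output back into the OT side.

```python
import json
import time

def read_ot_telemetry() -> dict:
    """Stand-in for a read-only query against an OT data historian.
    All tag names and values here are illustrative."""
    return {"ts": time.time(), "pump_rpm": 1480, "valve_open": True}

def export_snapshot() -> bytes:
    """Serialise one telemetry snapshot for one-way transfer to IT.
    There is intentionally no matching 'import' path: data leaves
    OT, but nothing - commands, model outputs - flows back in."""
    return json.dumps(read_ot_telemetry()).encode() + b"\n"

# In production the exported bytes would cross a unidirectional
# gateway (data diode) rather than an ordinary two-way network link,
# enforcing in hardware what this sketch only enforces by omission.
print(export_snapshot())
```

The design choice is the absence of a return channel: an AI system on the IT side can observe plant data for analytics, but has no code path through which to see into or act on OT systems.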

Demain remains concerned that regulatory guidance does not go far enough. He stated that current advice is conservative but suggested a stronger stance is required to protect critical operations.

"The latest advice from CISA is good in terms of keeping AI away from OT (ie. provide a read only data feed to it), sending data safely from OT to IT but not including AI where it could see/control OT systems. I do think they could go harder and discourage AI use on anything connected to OT. Safety first should mandate that these systems should be treated as a safety risk to operations at this stage," said Demain, Chief Executive Officer, e2e-assure.