Predicting the Six Biggest Impacts AI Will Have on OT Cybersecurity

No facet of manufacturing will be spared.


Artificial intelligence continues to be the source of the most optimism, pessimism, anxiety, predictions, conversations, forecasts, reports, surveys and debate throughout the industrial realm. Whether you're bullish, bearish or just confused about the evolving role of AI, one thing is certain: it is unavoidable.

So, in an attempt to tackle this topic from all sides, here's a collection of predictions from leading OT cybersecurity stakeholders. 

Frank Balonis, CISO and Senior VP of Operations, Kiteworks

Third-party AI data handling will emerge as the defining supply chain risk. Manufacturing has always lived and died by supply chain risk, but the nature of that risk is evolving. Survey data tells the story: manufacturing leads all sectors in concern about end-to-end visibility gaps, with two-thirds flagging it as a top priority—far above the cross-industry average.

We're also significantly more worried about a lack of real-time breach notifications from partners than other industries. Now layer AI on top of that. When your contract manufacturer starts running production data through their internal copilot, or your logistics partner feeds shipment information into their optimization models, where does that data go? Does it train their systems? Can it leak into outputs for other customers? 

Traditional vendor assessments weren't built for these questions. I think 2026 is when manufacturers start demanding real answers—attestations about AI data handling, contractual prohibitions on training use, and audit rights for AI systems that touch our data. The suppliers who can't articulate what happens to customer data in their AI stack are going to start losing contracts. 

Josh Taylor, Lead Security Analyst, Fortra

Enterprises will start treating AI systems as insider threats. As agents gain system-level permissions to act across email, file storage, and identity platforms, companies will need to monitor machine behavior for privilege misuse, data leakage, etc. The shift happens when organizations realize their AI assistants have broader access than most employees and operate outside traditional user behavior analytics.

AI agents need cross-functional access to be useful, they operate 24/7, and they make thousands of decisions per day that no human reviews. The first time an AI agent gets compromised through prompt injection or a supply chain attack and starts quietly exfiltrating customer data under the guise of "helping users," organizations will realize they built privileged access with no monitoring.     
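Treating an agent as an insider threat can be as simple as checking every action against a per-agent allowlist and rate baseline. A minimal sketch follows; the agent names, resources, and thresholds are illustrative assumptions, not any vendor's actual policy model.

```python
# Sketch: monitor an AI agent the way user behavior analytics monitors an
# insider -- allowlist its resources and flag privilege misuse or anomalies.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed_resources: set        # resources this agent may touch
    max_actions_per_hour: int     # crude rate baseline

@dataclass
class AgentMonitor:
    policies: dict                        # agent name -> AgentPolicy
    alerts: list = field(default_factory=list)
    counts: dict = field(default_factory=dict)

    def record(self, agent: str, resource: str) -> bool:
        """Return True if the action is allowed; otherwise queue an alert."""
        policy = self.policies.get(agent)
        if policy is None or resource not in policy.allowed_resources:
            self.alerts.append((agent, resource, "unauthorized resource"))
            return False
        self.counts[agent] = self.counts.get(agent, 0) + 1
        if self.counts[agent] > policy.max_actions_per_hour:
            self.alerts.append((agent, resource, "rate anomaly"))
            return False
        return True
```

In practice the allowlist would come from the agent's provisioned scopes and the alerts would feed the same pipeline as human insider-threat detections, so a compromised agent quietly exfiltrating data trips the same wires an employee would.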

By Q2 2026, we will likely see a company sue over an AI-assisted system after the AI makes a decision that causes measurable business harm, such as leaked confidential information, a violated regulatory requirement, or a commitment the company can't honor.

A lawsuit will likely involve an AI agent that had access to privileged information and disclosed it inappropriately, or an AI assistant that shared proprietary data. This will force the industry to answer questions nobody wants to ask: Who is liable when an AI you gave permission to act on your behalf does something harmful? The vendor? The company? The AI itself?

George Gerchow, IANS Research and CSO, Bedrock Data

Failure to red team AI crosses the threshold into criminal negligence territory. Adversarial AI testing will become a board-level accountability issue and a standard line item in D&O insurance policies and audit requirements. 

Failure to red team AI becomes negligent when high-risk workflows lack enforced verification. Traditional phishing drills have failed; it’s time to implement real controls. Executives must publish standing “how I will contact you” policies with approved channels and verification phrases. Any request for data, credentials, funds or banking changes requires out-of-band two-factor verification with a designated approver, and deepfake-resistant procedures become mandatory.
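The out-of-band rule above reduces to a simple invariant: a sensitive request is never approved on the channel it arrived on. A minimal sketch, assuming hypothetical request types and channel names:

```python
# Sketch: out-of-band two-factor approval for high-risk requests.
# Request types and channel names are illustrative assumptions; a real
# deployment would tie these to ticketing, telephony, and approver rosters.

SENSITIVE = {"wire_transfer", "banking_change", "credential_reset"}

def approve(request_type: str, origin_channel: str, approvals: dict) -> bool:
    """Allow a sensitive request only if a designated approver confirmed
    it on a channel different from the one the request came in on.
    `approvals` maps channel name -> approver id."""
    if request_type not in SENSITIVE:
        return True
    return any(channel != origin_channel for channel in approvals)
```

The design point is that a deepfaked voice or spoofed email can control one channel, but forcing confirmation through a second, pre-agreed channel raises the bar from fooling a person to compromising two independent systems.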

Organizations must combine this with public bug bounties targeting LLM and RAG pipelines, along with pre-production AI red-team gates and quarterly executive reports on findings, fixes and accountable parties. The focus must shift from training people to implementing proof-based systems. 

Dr. Darren Williams, Founder and CEO, BLACKFOG

Shadow AI will emerge as the #1 threat to organizations. The explosive growth in AI usage represents the single greatest operational threat to organizations, putting intellectual property (IP) and customer data at serious risk. 

While AI adoption is growing rapidly, enterprises are increasingly exposed to risks related to data security, third‑party AI tools, shadow AI usage, and governance issues. When sensitive IP or Personally Identifiable Information (PII) is entered into unsanctioned AI systems, the data may be used for model training, stored externally, or exposed in unexpected ways, leading to compliance, IP, and reputational risk.  

Organizations must monitor not only sanctioned AI tools but also the growing ecosystem of micro‑AI extensions and plugins that can quietly extract or transmit data. A global KPMG and University of Melbourne survey of 48,340 individuals across 47 countries found that 48 percent of employees admitted uploading company data into public AI tools, and only 47 percent received formal AI training, underscoring the real and growing risk of unsanctioned AI use.
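One concrete form of that monitoring is classifying outbound traffic against a list of sanctioned AI endpoints. A minimal sketch, using made-up example domains rather than any real AI service:

```python
# Sketch: flag outbound requests to AI endpoints that are not sanctioned.
# All hostnames below are hypothetical placeholders.
from urllib.parse import urlparse

SANCTIONED_AI = {"copilot.internal.example.com"}          # approved tools
KNOWN_AI_HOSTS = {"chat.example-ai.com"} | SANCTIONED_AI  # all known AI hosts

def classify(url: str) -> str:
    """Label a destination as 'sanctioned', 'shadow-ai', or 'non-ai'."""
    host = urlparse(url).hostname or ""
    if host not in KNOWN_AI_HOSTS:
        return "non-ai"
    return "sanctioned" if host in SANCTIONED_AI else "shadow-ai"
```

A real deployment would sit at the proxy or DLP layer and keep the known-AI-host list updated from threat intelligence feeds, since new AI plugins and extensions appear faster than any static allowlist can track.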

Shadow AI will also become the biggest cost amplifier in data breaches. As employees and teams adopt unsanctioned AI tools outside of IT oversight, hidden vulnerabilities multiply, turning minor data leaks into costly breaches. Studies by IBM show that breaches involving unmanaged AI tools ("shadow AI") are substantially more expensive, adding an average of $670,000 in additional costs.

Karl Holmqvist, Founder and CEO, LASTWALL

Broken trust and a reckoning after the Wild West of AI deployment. The unchecked rush to deploy AI without proper safeguards will trigger a major security and trust reckoning in 2026.

Over the past year, countless AI tools and systems were rolled out with minimal oversight, and the fallout is coming due. We anticipate the first high-profile security breach caused directly by an autonomous AI agent in 2026, validating warnings that poorly governed AI can create new failure modes.

Attackers are already leveraging AI as a force multiplier: classic threats like phishing are being supercharged by flawless deepfake voices and personalized automation, allowing minor vulnerabilities to chain into major breaches at machine speed. 

In 2026, smart organizations will rein in some of their initial AI deployments with rigorous security assessments, access controls, and real-time monitoring of AI behaviors. However, some will not, and the results will be devastating.

Hopefully we’ll see the rise of AI governance frameworks and possibly new laws holding companies accountable for AI-induced harm. Meanwhile, the deluge of deepfake-generated disinformation and fraud will prompt a fight for digital truth. As AI blurs the line between reality and fabrication, the concept of authenticity is emerging as the new pillar of cybersecurity. Companies will start investing in verification technologies (watermarks, provenance tracking, digital signatures) to ensure that what users see and hear is genuine.
