AI Won’t Replace the Cybersecurity Workforce, But It Will Redefine It

AI is framed as a multi-faceted solution, but the conversation often goes off course.


Manufacturing has been the most targeted industry by cybercriminals for four consecutive years, according to the IBM X-Force Threat Intelligence Report 2025. Recent incidents show us why. In August, a cyberattack forced Jaguar Land Rover to halt production for a month, setting off losses that impacted its global supply chain.

Most manufacturers already understand the risk. Security teams are being asked to cover increasingly connected IT and operational environments with limited staff and aging systems. Much of their time is still consumed by pulling information together and documenting incidents, and that leaves them with less time for the work that actually reduces risk.

Artificial intelligence (AI) is often framed as the solution to this problem. But the conversation frequently veers into the wrong territory, one that asks whether AI can replace humans altogether. From what I see working directly with chief information security officers (CISOs), that is the wrong question.

Some industries were always going to be early adopters of AI in security. Financial services, for example, have both the budget and the operational maturity to experiment early. But manufacturing and retail are emerging as some of the most motivated adopters. 

In fact, the 2025 Deloitte Smart Manufacturing Survey finds that, among 600 executives surveyed at large U.S. manufacturing companies, 29 percent say they are using AI and machine learning at the facility or network level.

AI is being rolled out to do the kinds of manual work that consumes time without adding insight: collecting data across tools, correlating signals, documenting findings, and assembling reports. These tasks are important, but they don’t require human judgment at every step. Offloading them frees analysts to focus on investigation and response, which is where human context still matters.

What AI Is Changing Inside Manufacturing Teams

There is a persistent fear that AI will be used to eliminate entry-level roles or automate away entire tiers of the SOC. In practice, what’s happening looks very different.

Traditional tier-one roles are changing, not disappearing. Instead of spending hours triaging alerts or stitching together timelines manually, analysts now work with AI systems that surface relevant evidence, suggest lines of inquiry, and help document conclusions. The human remains responsible for validating the outcome, understanding the business impact, and deciding what action to take.

This shift benefits both ends of the experience spectrum. Senior analysts reclaim time previously lost to administrative work. Junior analysts gain context faster, with AI helping them understand unfamiliar tools, data sources, and attack patterns. The result is more consistent outcomes across the team, regardless of individual tenure. For manufacturers operating with lean teams, that consistency is critical.

“Signal versus noise” has been a cybersecurity talking point for more than a decade, but what’s changing now is the ability to operationalize that idea.

AI does not magically eliminate false positives, but it does help correlate signals across systems, summarize what matters, and present a coherent narrative of what happened and why it matters. That narrative, which includes clear timelines, weighted evidence, and documented reasoning, dramatically reduces the cognitive load on analysts.

When teams spend less time deciphering raw data, they detect real issues faster and respond with more confidence. That’s not automation replacing judgment; it’s automation enabling it.

Human Governance Still Matters 

Manufacturers operating in regulated or safety-critical environments are right to be cautious about adopting AI, particularly when sensitive data and compliance obligations are involved. As AI becomes part of security operations, governance has to be built in from the outset, with clear controls over how data is handled and retained, along with confidence that these protections are reflected in the system’s underlying architecture and compliance posture. 

For CISOs, evaluating AI tools increasingly comes down to asking practical questions about data use and system design, because trust is earned through transparency and implementation rather than assurances.

At the same time, the cybersecurity workforce itself is changing as environments grow more complex and teams are expected to work alongside AI as part of daily operations. The fundamentals that have always mattered still apply, including curiosity, sound judgment, and an understanding of how systems behave, but professionals are now expected to use AI to help process information and keep pace with volume without losing accountability for decisions.

Security teams do not need deep expertise in how language models function internally, but they do need clarity about the problems they are solving and the outcomes they are responsible for. 

For manufacturers with lean teams, these changes offer a way to maintain effective security operations as complexity increases, and they are already shaping the security workforce of the future in manufacturing.
