
When hackers target a manufacturer, the impact ripples beyond IT, disrupting everything from the factory floor to the finished product.
Last year, a ransomware attack at Johnson Controls disrupted the company's internal systems, affecting its work in industrial controls, building security and facility operations.
JBS SA, one of the world's largest meat processors, had to halt all U.S. cattle slaughtering for a day after a cyberattack hit its operations.
ASCO, a major global supplier of airplane parts, was forced to shut down four factories for several days due to ransomware infecting its internal systems.
In the semiconductor industry, Applied Materials projected a $250 million hit after a cyberattack on one of its suppliers disrupted production.
This litany of examples (the complete list is even longer) reminds us that cybersecurity failures don't stay in IT; they threaten the safety of operations, employees and the environment, disrupt the supply chain and, ultimately, hit a company's bottom line. It also explains why more manufacturers are turning to artificial intelligence (AI) to flag anomalies and weak spots in their networks before attackers can exploit them.
AI can add real defensive power if used correctly. By scanning for subtle irregularities, AI can help catch threats early, preventing disruptions that would otherwise shut down lines or break delivery promises.
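To make that concrete, here is a minimal sketch of what anomaly detection on OT network telemetry might look like, using scikit-learn's IsolationForest. The telemetry fields, sample values and contamination rate are illustrative assumptions, not a production design.

```python
# Minimal sketch: flagging unusual OT network behavior with an Isolation Forest.
# Feature names, sample values, and the contamination rate are assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: one row per device per minute.
telemetry = pd.DataFrame({
    "packets_per_min":   [120, 118, 125, 119, 122, 4800],  # last row: sudden burst
    "unique_dest_ips":   [3, 3, 4, 3, 3, 42],              # fan-out to many hosts
    "avg_payload_bytes": [64, 66, 63, 65, 64, 1400],
})

# Train on what "normal" looks like; assume ~1% of samples are anomalous.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(telemetry)

# -1 marks an anomaly; route those rows to a human analyst, don't auto-block.
telemetry["anomaly"] = model.predict(telemetry)
print(telemetry[telemetry["anomaly"] == -1])
```

The key design choice is the last comment: in an OT setting, an anomaly score should trigger human review, not an automated shutdown.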
However, manufacturers looking to close security gaps have some hard work ahead, especially in operational technology (OT) environments, where systems weren't built for frequent updates or autonomous features. It doesn't help that many facilities still rely on legacy systems with security vulnerabilities. AI can help, but you need to know how those systems function, what risks they introduce, and how they're integrated into your OT cybersecurity program.
The Legacy Challenge
Legacy OT systems weren't built to be constantly updated or to learn independently. That's why it's critical to know what the AI is evaluating, whether AI-driven behavior is already embedded in your equipment, and whether that behavior is acceptable or should be shut off altogether.
Unlike IT, where software changes are routine, OT systems prioritize stability, uptime, and safety. That means any AI-enabled feature, especially when it's there by default, can introduce new risks if it's not fully understood or adequately configured.
In some cases, organizations don't even know those features are active. That's a problem because you can't secure what you don't understand. So the key isn't just deploying AI; it's knowing exactly where it's running, what it's doing, and whether the risk it introduces is acceptable. Since AI learns from the data it ingests, organizations need to scrutinize the quality and source of that data. If the input is flawed, any output will be, too.
AI systems rely on accurate information to work correctly, and bad data leads to bad calls. A wrong move from a faulty system can endanger lives, which is all the more reason why human operators still need to verify the data and double-check conclusions.
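As a concrete illustration, here is a sketch of the kind of sanity checks that might sit between raw sensor data and a model. The field names, valid ranges and trusted-source list are assumptions made for the example.

```python
# Sketch: sanity-checking sensor data before it ever reaches a model.
# Field names, ranges, and the trusted-source list are illustrative assumptions.
TRUSTED_SOURCES = {"plc-01", "plc-02", "historian"}
VALID_RANGES = {"temp_c": (-20.0, 150.0), "pressure_kpa": (0.0, 900.0)}

def validate_reading(reading: dict) -> list[str]:
    """Return a list of reasons to reject this reading; empty means it passes."""
    problems = []
    if reading.get("source") not in TRUSTED_SOURCES:
        problems.append(f"untrusted source: {reading.get('source')}")
    for field, (low, high) in VALID_RANGES.items():
        value = reading.get(field)
        if value is None:
            problems.append(f"missing field: {field}")
        elif not (low <= value <= high):
            problems.append(f"{field}={value} outside [{low}, {high}]")
    return problems

# Readings that fail validation get quarantined for review, not silently ingested.
reading = {"source": "plc-07", "temp_c": 410.2, "pressure_kpa": 520.0}
issues = validate_reading(reading)
if issues:
    print("quarantine:", issues)
```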
Red Flags
The problem with a lot of AI tools is that they don't explain why they're raising a flag. That lack of context can slow people down or, worse, push them to make decisions based on bad information. It also leads to alert fatigue, where people start ignoring warnings because they're drowning in them. I've seen teams roll out new cyber tools, only to be hit with thousands of alerts they can't make sense of. Without a way to filter the noise, it's nearly impossible to spot the real threats.
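Filtering that noise doesn't have to be exotic. Below is a sketch of simple deduplication and severity-based triage; the alert fields, values and weights are hypothetical.

```python
# Sketch: collapsing duplicate alerts and surfacing the riskiest ones first.
# The alert fields and severity values are illustrative assumptions.
from collections import Counter

alerts = [
    {"rule": "new-external-connection", "asset": "vfd-12", "severity": 3},
    {"rule": "new-external-connection", "asset": "vfd-12", "severity": 3},
    {"rule": "new-external-connection", "asset": "vfd-12", "severity": 3},
    {"rule": "firmware-hash-mismatch",  "asset": "plc-01", "severity": 9},
]

# Deduplicate on (rule, asset) and count repeats instead of re-alerting.
counts = Counter((a["rule"], a["asset"]) for a in alerts)
severity = {(a["rule"], a["asset"]): a["severity"] for a in alerts}

# Rank what's left so analysts see the riskiest alerts, not the loudest.
triaged = sorted(counts, key=lambda k: severity[k], reverse=True)
for rule, asset in triaged:
    print(f"{severity[(rule, asset)]:>2}  {rule} on {asset} (x{counts[(rule, asset)]})")
```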
Then, there's embedded AI. I've worked with equipment, like variable frequency drives, where teams didn't even know those features were active. You need to know what those systems are doing and whether you're okay with it. If not, you've got to be able to go in, adjust the settings or shut those features off altogether. But none of that's easy if you don't have the visibility or control you need. That's why plugging in AI tools isn't enough; you need to know exactly how they behave in your environment.
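One practical starting point is auditing your asset inventory against an approved-feature baseline. The sketch below uses hypothetical device names and feature flags; the point is the comparison, not the specific fields.

```python
# Sketch: auditing an asset inventory for embedded "smart" features that were
# never explicitly approved. Device names and feature flags are hypothetical.
APPROVED_FEATURES = {
    "vfd-12": {"auto_tuning"},   # adaptive tuning reviewed and accepted
    "plc-01": set(),             # no autonomous features approved
}

inventory = [
    {"device": "vfd-12", "enabled_features": {"auto_tuning", "cloud_analytics"}},
    {"device": "plc-01", "enabled_features": set()},
]

for item in inventory:
    approved = APPROVED_FEATURES.get(item["device"], set())
    unapproved = item["enabled_features"] - approved
    if unapproved:
        # Flag for review: disable the feature, or document and accept the risk.
        print(f"{item['device']}: unapproved features {sorted(unapproved)}")
```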
Attackers often work their way into manufacturing systems by compromising suppliers or slipping in through software, sensors, or embedded components. That's why it makes sense for CISOs to reassess IT and OT priorities during construction or major upgrades. In manufacturing settings where uptime and safety can't be compromised, building cybersecurity in from the start not only strengthens defenses, it also helps avoid the cost and headaches of reworking systems later.
The Broader Risk Landscape
Managing AI risk starts with understanding where the AI resides: inside your organization, within OT systems, or beyond your network perimeter. Each environment brings its own challenges.
Inside the enterprise, staff might use tools that security teams have never seen. If employees or contractors rely on unapproved AI tools, they could unknowingly open the door to data exposure or system-wide threats. Companies need clear cybersecurity guidelines that spell out what's allowed and why data protection matters. Guardrails like training, device settings and cyber policy enforcement go a long way toward limiting internal risk.
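Policy enforcement can be partly technical, too. Here is a toy sketch of an egress check that allows traffic only to vetted AI services; the domains and the policy itself are hypothetical.

```python
# Sketch: one technical guardrail, checking requests to AI services against an
# approved list at the egress proxy. Domains and the policy are hypothetical.
APPROVED_AI_SERVICES = {"ai.internal.example.com"}

def egress_allowed(destination_host: str) -> bool:
    """Allow traffic only to AI services the security team has vetted."""
    return destination_host in APPROVED_AI_SERVICES

for host in ["ai.internal.example.com", "free-ai-summarizer.example.net"]:
    verdict = "allow" if egress_allowed(host) else "block and log for review"
    print(f"{host}: {verdict}")
```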
Outside your network, third-party partners may be using AI tools you can't monitor at all. That's where culture and explicit cybersecurity policies and procedures make a real difference. When people know what's expected, inside your team or across your vendor ecosystem, they're less likely to introduce risk.
Unlocking AI's full potential will take more than technology. Manufacturers will need clear visibility across systems, ongoing training, and tighter collaboration between IT and OT.
Everything depends on how well an organization understands the specific cyber threats it faces and how carefully it integrates AI capabilities into its OT cybersecurity program. Getting that alignment right isn't a luxury in high-stakes environments like manufacturing, where a single cyber incident can halt production or compromise safety. It's a prerequisite for resilience.