Artificial intelligence is rapidly becoming one of the most valuable assets inside modern organisations. Protecting the networks that train, operate and distribute AI systems is therefore no longer just a cybersecurity challenge, but a strategic requirement that determines whether AI can scale safely and sustainably.
Artificial intelligence infrastructure is expanding at a pace rarely seen in enterprise technology. Data centres are being redesigned for high-density GPU clusters, optical networks are evolving to support distributed training workloads, and organisations are deploying inference capabilities across edge locations ranging from factories to retail estates. Yet beneath the excitement surrounding this rapid expansion lies a less visible reality. As AI systems grow in scale and economic value, they create a new category of infrastructure risk that traditional security models were never designed to address.
The challenge is not simply that AI workloads are large. It is that they operate across environments that are simultaneously physical, digital and operational. Power systems, cooling infrastructure, networking fabrics and cloud platforms all form part of the same execution environment. Protecting AI therefore requires security thinking that extends well beyond conventional IT boundaries.
The cyber physical convergence
For Schneider Electric, which works closely with hyperscale and enterprise data centre operators, the security conversation increasingly begins with infrastructure itself. “Some of these measures were already necessary historically,” explains Kreshnik Musaraj, CSO for Secure Power, Data Center, and One Services at Schneider Electric. “The difference today is that they become mandatory and above all integrated. One of the key points is connecting physical access controls with the Security Operations Center. If someone breaches a restricted perimeter, it should not only alert physical security, it must also trigger awareness and response at the cybersecurity level. We are in a cyber-physical world, not just a physical one.”
That convergence between physical infrastructure and digital security represents one of the defining characteristics of AI-scale environments. Unlike traditional enterprise IT systems, AI infrastructure depends on highly specialised physical assets, including power distribution networks, cooling systems and environmental monitoring platforms. These systems are now connected, instrumented and often software-driven, which means they can become attack surfaces as well as operational dependencies.
“Real-time reactive capability is equally important,” Musaraj continues. “It is not enough that incidents are recorded. There must be immediate counterreaction. Access limitation down to the device level, reduction of access vectors to IT assets, strict discretionary access policies and third-party security assurance for personnel are essential. Power and cooling systems, even if located in basements, must not be neglected. If misused, they can bring down the entire infrastructure.”
The new attack surface
The scale of the problem grows further once the operational complexity of AI networks is considered. Training clusters involve thousands of accelerators communicating continuously across high-speed networks, exchanging enormous volumes of data as models learn and adapt. Inference systems distribute those models across edge locations where latency, reliability and regulatory constraints all influence architecture decisions. Every connection within this ecosystem represents a potential security exposure.
For James Tucker, Head of CISO for EMEA at Zscaler, the nature of AI-related attacks is often misunderstood. “Most attacks are not sophisticated data poisoning or model manipulation,” he says. “They are prompt injection, tricking the system into bypassing its guardrails and extracting information it should not provide. The key is to stop thinking of AI as some magical new category. It is a way to create and move data, which means attackers focus on the data itself.”
That observation highlights an important shift in the cybersecurity landscape. Traditional network security models were designed primarily to protect infrastructure boundaries. Firewalls, VPNs and segmentation strategies focused on controlling who could enter or exit a network. AI environments, by contrast, often involve continuous internal data movement across distributed systems where the concept of a clear boundary becomes much harder to define.
The implications are particularly visible when organisations attempt to apply legacy security tools to modern AI environments. “If traditional means legacy appliances or hub-and-spoke VPNs, then yes they struggle with modern AI workloads,” Tucker explains. “But for organisations already running cloud-based security, the principles have not changed. You are still protecting data in motion and at rest, just at different speeds and scales.”
The real vulnerability, he argues, frequently lies not in exotic AI-specific exploits but in everyday behaviour. “Cybersecurity professionals love talking about complex AI attacks,” Tucker notes. “What do I see? Employees pasting customer data into ChatGPT because their deadline is tomorrow and IT takes three weeks to approve anything. Recent telemetry shows seventy-seven percent of employees paste confidential data like client lists, financial numbers and source code directly into generative AI tools, and eighty-two percent of those actions happen through unmanaged accounts that bypass corporate identity systems entirely.”
These behavioural risks illustrate why AI security cannot be treated solely as a technical problem. Protecting models and training data requires visibility across the entire data lifecycle, from development pipelines and inference APIs to the everyday interactions employees have with generative AI tools.
Securing data at scale
One of the most widely discussed approaches to addressing this complexity is zero-trust architecture, which replaces implicit trust within networks with identity-based access controls applied to every system and workload. “Zero trust is about identity,” Tucker explains. “That applies to workloads just as much as people. The question is not GPUs talking to each other. It is workloads with identities communicating securely. You create ring-fenced environments where workloads can move data quickly inside the enclave, but every entry and exit point is tightly controlled. Default deny, explicit allow, and strong identity for every workload.”
Designing those ring-fenced environments requires careful balancing. AI training clusters generate enormous east-west traffic flows as nodes exchange parameters and synchronise results. Excessive segmentation can degrade performance, while insufficient segmentation can expose sensitive models or training data to unintended access.
“You do not segment every individual conversation, or you turn your training cluster into a slow-motion film,” Tucker says. “Keep the inside clean and secure and treat everything crossing the boundary as hostile until proven otherwise.”
Infrastructure security realities
While digital security frameworks are evolving rapidly, the infrastructure supporting AI operations presents its own distinct challenges. Power systems, cooling platforms and monitoring technologies are becoming increasingly intelligent and connected, expanding the attack surface within data centre environments.
“Vulnerabilities are not new,” Musaraj explains. “What changes is their typology and scale. A three-phase UPS that once operated with relatively simple firmware is evolving toward something closer to an operating system. As connectivity increases, the attack surface becomes larger and more exploitable.”
This transformation is part of a broader shift toward software-defined infrastructure, where operational technologies increasingly resemble traditional IT systems in terms of complexity and connectivity. As a result, infrastructure components must now be protected using cybersecurity principles that were historically reserved for applications and servers.
“The response requires a mindset shift,” Musaraj continues. “Security must be embedded from research and development through deployment and lifecycle management, following a secure development lifecycle and strict compliance with cybersecurity regulations. Architecture-level threat modelling is fundamental.”
The operational environment in which AI infrastructure operates further complicates security management. Data centres vary widely in their maturity levels, operational practices and legacy configurations, which means identical technologies can face very different risk profiles depending on how they are deployed. “The same product may be deployed in very different conditions,” Musaraj says. “Some environments are legacy installations with historical configurations, while others are new facilities designed with security in mind. Maturity levels differ and so does operational discipline.”
This variability makes lifecycle management a central component of AI security strategy. Infrastructure that was secure when deployed may become vulnerable as new threats emerge or operational practices change. “What was not vulnerable yesterday may become vulnerable tomorrow,” Musaraj observes. “Vulnerabilities evolve and the installed base must evolve with them. Structured servicing, upgrade programmes and proactive remediation across the install base are necessary to prevent risk accumulation over time.”
Predictive analytics and monitoring technologies are increasingly being used to manage this lifecycle risk. By analysing infrastructure telemetry, operators can anticipate maintenance requirements, identify anomalies and plan security interventions before disruptions occur. “Predictive maintenance and predictive analytics allow operators to forward plan,” Musaraj explains. “Instead of reacting to disruption, organisations can anticipate it by removing or reducing potential attack surfaces as part of planned interventions.”
Security as strategic capability
Beyond operational security, regulatory frameworks are beginning to shape how organisations design and govern AI systems. The EU AI Act, the NIST AI Risk Management Framework and emerging national policies are introducing new expectations around transparency, accountability and data protection. “The main frameworks to watch are the NIST AI Risk Management Framework and the EU AI Act,” Tucker says. “Both focus on transparency, accountability and data protection. But organisations are still struggling with basics like data classification. If you do not know what data you are protecting, segmentation becomes guesswork.”
This regulatory evolution reinforces the idea that AI security is becoming a strategic capability rather than a defensive obligation. Organisations that design secure architectures from the outset will be better positioned to scale AI deployments across jurisdictions, industries and regulatory regimes.
The alternative approach, building AI infrastructure first and adding security controls later, is far more risky. “Most organisations build AI environments first and add security afterwards,” Tucker notes. “Cloud deployments move fast, which encourages ad hoc approaches. Security should be part of the philosophy from day one, not something bolted on after the infrastructure is already running.”
As artificial intelligence becomes embedded within economic and industrial systems, the stakes surrounding its security will only increase. AI models represent intellectual property, strategic capability and operational insight all at once. The networks that connect these systems therefore carry extraordinary value.
The emerging lesson from organisations operating at the forefront of AI infrastructure is that security must evolve at the same pace as the technology it protects. That evolution requires collaboration between cybersecurity professionals, infrastructure engineers, regulators and business leaders, each responsible for different layers of a highly interconnected ecosystem.
AI may be defined by algorithms and data, but the systems that make it possible depend on infrastructure. Protecting that infrastructure at scale is not simply a defensive necessity. It is becoming one of the defining capabilities of organisations that intend to compete in the AI economy.



