Enterprises have accepted that AI will not live in a single data centre or hyperscale region. The unresolved challenge is how to operate intelligence that stretches from core to edge, across clouds and sovereign domains, without losing control of performance, economics or resilience.
Architecturally, the debate is largely over. Serious organisations no longer assume that AI will sit neatly inside one hyperscale region or within the walls of a private data centre. Training may remain concentrated in large, centralised AI factories, but inferencing increasingly moves closer to data sources and users. Sovereignty requirements fragment infrastructure across jurisdictions. Energy constraints influence where workloads can realistically run. The cloud continuum, spanning core, metro and edge, is not an aspirational diagram. It is becoming the structural shape of AI deployment.
Yet architectural inevitability does not equal operational readiness. Patrick McCabe, Director of Marketing for AI Networks at Nokia, argues that most enterprises are strategically aligned but operationally exposed. “There is broad agreement that AI will span public cloud, private cloud and edge,” he says. “The gap is in knowing how to operate that environment as a single, coherent system. The tooling, automation and processes have not kept pace with the architectural shift.” The risk is not that the continuum fails to materialise. The risk is that it materialises faster than organisations can manage it.
The challenge intensifies as AI moves from experimental to mission critical. Proofs of concept tolerate inefficiency. Production systems do not. As AI underpins customer experience, operational optimisation and revenue generation, the tolerance for unpredictability shrinks. “When AI services are embedded into the business,” McCabe notes, “network behaviour directly influences outcomes. That makes operations far more consequential than in traditional IT estates.”
Distributed inferencing turns networks into control systems
One of the least appreciated consequences of distributed AI is the way it transforms the role of the network. Historically, enterprise networks transported data between applications and users. In distributed AI environments, the network increasingly synchronises decision-making across locations. “When you distribute inference,” McCabe explains, “the network becomes part of the execution path of intelligence. It is coordinating state, ensuring consistency and feeding models with data in real time. It is not simply forwarding packets.”
That shift alters performance requirements. Latency becomes more than a user-experience metric; it becomes a determinant of decision quality. In manufacturing, milliseconds can affect control systems. In financial services, micro-delays influence transaction outcomes. In retail, customer-facing AI assistants must respond instantly to maintain engagement. “Distributed inferencing effectively turns the network into a control system,” McCabe says. “If latency or congestion fluctuates, the AI output fluctuates. That is a very different operational dependency.”
Split inference models compound this complexity. Portions of a model may execute at the edge for responsiveness, while more computationally intensive layers remain centralised. This design optimises for performance and cost but increases interdependence. “You are stitching together execution across domains,” McCabe observes. “That requires deep visibility into traffic flows and the ability to manage them dynamically. Without automation, you cannot sustain that level of coordination.” Static network configurations, designed for predictable enterprise workloads, struggle under these dynamic patterns.
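To make that interdependence concrete, here is a minimal Python sketch of split inference: a lightweight stage runs at the edge, a heavier stage runs centrally, and a simulated WAN hop sits in the execution path between them. The stage logic, split point and latency figures are invented for illustration, not drawn from any real deployment or vendor API.

```python
# Toy split-inference pipeline: edge stage -> network hop -> core stage.
# All numbers and stage bodies are illustrative stand-ins.
import random
import time

def edge_stage(sample: list[float]) -> list[float]:
    """Lightweight front layers running near the data source."""
    return [x * 0.5 for x in sample]          # stand-in for early model layers

def core_stage(features: list[float]) -> float:
    """Compute-heavy back layers running in a central site."""
    return sum(f * f for f in features)       # stand-in for deep layers + head

def wan_hop(payload, base_ms=8.0, jitter_ms=12.0):
    """Simulated edge-to-core link: latency varies, so arrival time varies."""
    delay = (base_ms + random.uniform(0, jitter_ms)) / 1000.0
    time.sleep(delay)
    return payload, delay

if __name__ == "__main__":
    sample = [random.random() for _ in range(16)]
    start = time.perf_counter()
    features = edge_stage(sample)              # executes at the edge
    features, hop = wan_hop(features)          # the network sits in the execution path
    score = core_stage(features)               # executes centrally
    total = time.perf_counter() - start
    print(f"score={score:.3f}  hop={hop*1000:.1f} ms  end-to-end={total*1000:.1f} ms")
```

Run it a few times and the end-to-end figure moves with the simulated hop, which is the point: jitter on the link shows up directly as jitter in when the inference answer arrives.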
The telemetry generated by distributed AI also places pressure on infrastructure. Models constantly exchange updates, synchronise states and generate monitoring data. “AI traffic can be bursty and synchronised,” McCabe says. “It is not the steady-state behaviour traditional WAN designs were built for. If you rely on manual adjustments, you will always be reacting rather than anticipating.”
Workload mobility and the illusion of flexibility
Cloud strategy has long emphasised workload mobility. In theory, applications can move between environments to optimise cost or performance. In practice, AI workloads expose the fragility of that promise. Moving a stateless microservice is straightforward. Migrating a latency-sensitive, data-intensive AI service across domains is considerably more complex. “Mobility is often treated as a checkbox capability,” McCabe says. “For AI, it is an operational discipline that requires predictable connectivity and policy enforcement.”
Data gravity complicates the equation. Large training datasets anchor workloads in specific locations. Compliance requirements restrict where certain data can reside. Energy pricing influences where compute capacity is viable. The network becomes the mediator between these competing forces. “If you cannot guarantee performance characteristics across domains,” McCabe argues, “you cannot move workloads with confidence. That undermines the flexibility organisations think they have. The continuum may exist architecturally, but operational friction can make it effectively static.”
Many enterprises still manage networks through ticket-driven workflows and manual configuration changes. That model cannot scale to AI-driven traffic patterns. “Manual networking practices collapse under AI traffic patterns,” McCabe says. “You need continuous telemetry and automation. Otherwise, complexity accumulates until something breaks. Automation is therefore not an optimisation. It is a prerequisite for maintaining stability as AI services proliferate.”
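A minimal closed-loop sketch of what replaces the ticket queue: poll telemetry continuously, compare it to policy, remediate automatically. The metric names, thresholds and reroute action below are assumptions made for the example, not any vendor's actual interface.

```python
# Illustrative telemetry-driven control loop. In practice the feed would be
# streaming telemetry (e.g. gNMI or IPFIX) and the action an orchestrator call.
import random
import time

LATENCY_BUDGET_MS = 20.0   # assumed SLO for an inference path

def read_link_telemetry() -> dict:
    """Stand-in for a live telemetry feed."""
    return {"p99_latency_ms": random.uniform(5, 35),
            "utilisation": random.uniform(0.2, 0.95)}

def reroute_inference_traffic(reason: str) -> None:
    """Stand-in for an automated remediation, e.g. shifting flows to a spare path."""
    print(f"automated action: reroute ({reason})")

def control_loop(cycles: int = 5) -> None:
    for _ in range(cycles):
        t = read_link_telemetry()
        if t["p99_latency_ms"] > LATENCY_BUDGET_MS:
            reroute_inference_traffic(f"p99 {t['p99_latency_ms']:.1f} ms over budget")
        elif t["utilisation"] > 0.9:
            reroute_inference_traffic(f"utilisation {t['utilisation']:.0%}")
        else:
            print(f"within policy: p99={t['p99_latency_ms']:.1f} ms")
        time.sleep(0.1)    # a production loop runs continuously, not five times

if __name__ == "__main__":
    control_loop()
```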
“The implications extend to skills and governance,” McCabe continues. “Network teams must understand AI workload characteristics. AI teams must appreciate network constraints. Without shared visibility, each side optimises locally and degrades the system globally. Operating the continuum demands cross-functional fluency. You cannot treat networking as a separate discipline from AI operations.”
Sovereignty, compliance and cross-domain orchestration
Regulation and geopolitics add another layer of complexity. Data sovereignty requirements compel localisation of data and control. Critical infrastructure providers must demonstrate resilience within national boundaries. These constraints fragment infrastructure into controlled domains. “Sovereignty is not a theoretical consideration,” McCabe explains. “It directly shapes where workloads can run and how they interconnect. The continuum must respect those boundaries.”
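The operational effect is easy to sketch: sovereignty acts as a placement filter that runs before any performance-based scheduling. The site metadata and jurisdiction tags below are invented for the example.

```python
# Sovereignty-aware placement check: a workload tagged with a jurisdiction may
# only land on eligible sites. All names and attributes are hypothetical.
SITES = {
    "fra-edge": {"jurisdiction": "EU", "sovereign": True},
    "lon-core": {"jurisdiction": "UK", "sovereign": True},
    "us-east":  {"jurisdiction": "US", "sovereign": False},
}

def eligible_sites(data_jurisdiction: str, needs_sovereign: bool) -> list[str]:
    """Filter candidates before latency or cost is even considered."""
    return [name for name, s in SITES.items()
            if s["jurisdiction"] == data_jurisdiction
            and (s["sovereign"] or not needs_sovereign)]

if __name__ == "__main__":
    print(eligible_sites("EU", needs_sovereign=True))   # -> ['fra-edge']
```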
Interconnecting sovereign domains while maintaining performance and security demands sophisticated orchestration. Policy enforcement cannot rely on static rules. AI workloads evolve, models retrain and inference demand spikes unpredictably. “You need consistent control across domains,” McCabe says. “Security, segmentation and performance management must operate together. If they conflict, operations become unstable. The network’s control plane must therefore provide unified visibility across heterogeneous environments.”
Cross-domain orchestration also requires resilience planning. Distributed AI increases dependency on interconnection. If a link fails, workloads may need to re-balance dynamically. “Continuum resilience is not just about redundancy,” McCabe observes. “It is about intelligent failover that preserves performance characteristics. That requires automation embedded into the fabric. Designing for this level of adaptability forces organisations to reconsider long-standing operational assumptions.”
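That distinction between redundancy and intelligent failover can be shown in a few lines. In this toy version, with invented site names and numbers, failover selects the live site that still meets the latency budget rather than merely any site that is up.

```python
# Latency-aware failover: prefer sites that preserve the performance contract.
# "core-1" is alive but too distant, so it is never a valid fallback here.
LATENCY_BUDGET_MS = 25.0

SITES = {
    "metro-a": {"up": False, "rtt_ms": 6.0,  "headroom": 0.4},   # failed primary
    "metro-b": {"up": True,  "rtt_ms": 18.0, "headroom": 0.7},
    "core-1":  {"up": True,  "rtt_ms": 42.0, "headroom": 0.9},   # up, but too far
}

def pick_failover(sites: dict, budget_ms: float):
    """Among live sites within budget, pick the one with the most headroom."""
    candidates = {name: s for name, s in sites.items()
                  if s["up"] and s["rtt_ms"] <= budget_ms}
    if not candidates:
        return None   # redundancy exists, but the performance contract is broken
    return max(candidates, key=lambda n: candidates[n]["headroom"])

if __name__ == "__main__":
    print(f"failover target: {pick_failover(SITES, LATENCY_BUDGET_MS)}")  # metro-b
```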
From architectural inevitability to operational maturity
The industry’s conversation about the cloud continuum has focused heavily on architecture. The more urgent issue is operational discipline. The continuum promises proximity to data, regulatory compliance and scalability, but those advantages materialise only when complexity is managed effectively. “You can design the continuum on a whiteboard,” McCabe says. “The harder part is running it day after day, under real workload pressure.”
Performance assurance must evolve accordingly. AI services demand deterministic behaviour rather than best-effort connectivity. Network analytics must move from reactive troubleshooting to predictive optimisation. “Understanding how AI traffic behaves allows you to anticipate congestion before it impacts services,” McCabe notes. “That shifts networking from a reactive function to a proactive control system. Embedding intelligence into the network fabric becomes central to sustaining distributed AI.”
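As a rough illustration of that reactive-to-predictive shift, the fragment below extrapolates recent link utilisation and flags a breach before it happens; the linear fit, window size and thresholds are illustrative choices, not a production method.

```python
# Toy predictive check: project a least-squares trend a few steps ahead and
# act while utilisation is still under the limit.
def forecast_breach(samples: list[float], threshold: float, horizon: int) -> bool:
    """Fit a slope over the window, project `horizon` steps, test the limit."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    return samples[-1] + slope * horizon > threshold

if __name__ == "__main__":
    utilisation = [0.52, 0.57, 0.61, 0.68, 0.74]   # climbing, still below 0.9
    if forecast_breach(utilisation, threshold=0.9, horizon=4):
        print("predicted congestion: pre-position capacity or reroute now")
    else:
        print("no breach predicted within horizon")
```

Here the projected value (about 0.96 four steps out) crosses the 0.9 limit even though every observed sample sits below it, which is exactly the window in which a proactive system can act.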
Economic consequences sharpen this imperative. Degraded performance in a distributed AI service does not simply inconvenience users. It reduces revenue, increases operational cost and damages competitive standing. “When AI underpins customer experience or operational efficiency,” McCabe concludes, “network reliability becomes a board-level concern. The continuum is real. The question is whether organisations can develop the operational maturity to match it.”
The cloud continuum is no longer speculative. It is emerging across industries as AI expands beyond centralised environments. The decisive variable is not architectural imagination, but operational execution. Enterprises that treat networking as a dynamic, automated discipline will harness the continuum’s potential. Those that rely on legacy operational models will discover that distributed intelligence is far harder to run than it was to design.