How AI could transform networks from cost centres into economic engines

For decades enterprise and telecom networks have been treated as infrastructure overhead, a necessary expense that quietly connects applications and users. AI is changing that equation, transforming network performance from background utility into a direct determinant of revenue, competitiveness and strategic leverage.

For most of the internet era, the network was measured in terms of uptime and cost efficiency. It was rarely framed as a source of value in its own right. Chief financial officers saw it as a budget line to be optimised. Architects saw it as a transport layer to be scaled. The strategic conversation focused on applications, platforms and, more recently, data. The network was expected to work but rarely expected to differentiate.

Artificial intelligence (AI) is altering that balance in a fundamental way. As AI services become embedded in customer experience, operational optimisation and digital products, performance characteristics once considered technical details now influence business outcomes directly. Latency shapes responsiveness. Congestion affects throughput. Reliability determines service continuity. In an AI economy, these variables are not abstract metrics. They translate into revenue, cost and reputation.

Patrick McCabe, director of marketing for AI networks at Nokia, argues that this shift forces executives to reconsider long-held assumptions. “Historically, networks were built to move data between endpoints,” he says. “With AI, the network becomes part of the value chain. It directly influences how quickly decisions are made, how accurately models respond and how reliably services are delivered. In that sense, the network is no longer simply connective tissue. It is a performance multiplier or a constraint.”

Performance now dictates service viability

AI services are acutely sensitive to performance variability. Large language models, recommendation engines and real-time analytics platforms rely on predictable data flows. Small fluctuations in latency or packet loss can cascade through distributed systems, amplifying inefficiencies. “AI workloads are intolerant of unpredictability,” McCabe explains. “When performance degrades, you do not just slow down an application. You affect tokens per second, job completion time and ultimately the user experience.”
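The tokens-per-second effect McCabe describes can be sketched with a simple calculation. The 20 ms compute time per token and the network overheads below are illustrative assumptions, and the model deliberately simplifies by treating network delay as serial with compute per token, a worst-case view rather than a pipelined one:

```python
# Rough sketch of how per-token network delay erodes streaming
# throughput for a generative AI service. All figures are illustrative
# assumptions; real deployments overlap compute and communication.

def effective_tokens_per_sec(compute_ms_per_token: float,
                             network_ms_per_token: float) -> float:
    """Tokens/sec when each token pays both compute and network time."""
    return 1000.0 / (compute_ms_per_token + network_ms_per_token)

for net_ms in (0.0, 5.0, 20.0, 50.0):
    tps = effective_tokens_per_sec(20.0, net_ms)
    print(f"{net_ms:>4.0f} ms network overhead per token -> {tps:5.1f} tokens/s")
```

Even under these simplified assumptions, the pattern is the point: network delay that looks negligible in isolation compounds across every token a service emits.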

This is particularly visible in generative AI deployments. Training clusters depend on high-bandwidth, low-latency interconnects to synchronise GPUs. If east-west traffic encounters congestion, expensive compute resources can become idle. “Idle GPUs caused by network inefficiency may become the most expensive failure mode in AI infrastructure,” McCabe notes. “You can invest heavily in compute and power, but if the network cannot keep up, the economics unravel. The cost of underperforming networks is therefore no longer marginal. It is material.”
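The scale of that failure mode can be made concrete with a back-of-envelope calculation. The cluster size, notional $3 per GPU-hour rate and 15 per cent idle fraction below are assumptions chosen for illustration, not vendor pricing or measured data:

```python
# Illustrative back-of-envelope: compute spend wasted when GPUs sit
# idle waiting on the network during distributed training.
# All figures are assumptions for the sketch.

def idle_gpu_cost(num_gpus: int, hourly_cost_per_gpu: float,
                  training_hours: float, idle_fraction: float) -> float:
    """Dollar cost of GPU-hours lost to network-induced stalls."""
    return num_gpus * hourly_cost_per_gpu * training_hours * idle_fraction

# A 1,024-GPU cluster at a notional $3/GPU-hour, training for 30 days,
# with 15% of wall-clock time lost to communication stalls:
wasted = idle_gpu_cost(1024, 3.0, 30 * 24, 0.15)
print(f"${wasted:,.0f} of compute spend lost to network-induced idling")
```

Under these assumed figures the waste runs to hundreds of thousands of dollars for a single training run, which is why McCabe calls the cost material rather than marginal.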

Inferencing at scale amplifies the effect. Enterprises deploying AI assistants, fraud detection engines or predictive maintenance systems must deliver responses in real time. Delays undermine trust and usability. “Network performance now dictates whether an AI service is viable,” McCabe says. “If you cannot guarantee responsiveness, the service cannot compete. That linkage elevates networking decisions from operational tuning to strategic design.”

As AI penetrates customer-facing services, this relationship becomes even more pronounced. Retailers rely on real-time personalisation. Financial institutions depend on low-latency risk scoring. Manufacturers use AI to coordinate robotics and supply chains. In each case, the network mediates between data, model and action. “When the network underperforms,” McCabe argues, “you erode the value AI is supposed to create.”

Inverting traditional WAN economics

Traditional WAN economics were shaped by predictable enterprise traffic such as email, ERP transactions, web browsing and file transfer. AI traffic behaves differently, and understanding where that difference occurs is critical. The most intense traffic bursts are often generated inside the data centre itself during model training, where thousands of GPUs must exchange large volumes of synchronised data. These so-called elephant flows can saturate switching fabrics and create severe congestion if the underlying network architecture is not designed to handle them.

Once training is complete, however, the traffic pattern changes. The trained model is typically distributed across edge locations where inference occurs close to users or operational systems. In this phase, the challenge is less about massive internal data exchange and more about delivering consistent performance and low latency between the edge and the core model environment. AI networks therefore experience two distinct economic pressures: extreme burstiness within the data centre during training, and strict latency requirements across the wider network during inference.

Bandwidth planning must therefore become far more dynamic. Provisioning based on historical traffic averages proves inadequate when model retraining or inference surges spike demand unexpectedly. Static over-provisioning is expensive, yet under-provisioning constrains service performance. “You need granular visibility into how AI workloads consume network resources,” McCabe says. “Without that insight, you either waste capacity or create bottlenecks. The economic trade-off becomes more delicate as AI services scale.”
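The trade-off between average-based and burst-aware provisioning can be illustrated with a minimal sketch. The traffic samples below are synthetic, and the nearest-rank percentile helper is a hypothetical simplification, but they show why bursty AI workloads punish capacity planning built on historical averages:

```python
# Minimal sketch: sizing a link to average demand versus a high
# percentile. Traffic samples (Gbps) are synthetic and illustrative:
# steady inference traffic punctuated by retraining bursts.
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a sorted copy of samples."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

demand = [4, 5, 4, 6, 5, 4, 5, 38, 42, 5, 4, 6, 40, 5, 4]

avg = statistics.mean(demand)
p95 = percentile(demand, 95)
congested = sum(1 for d in demand if d > avg)

print(f"mean demand: {avg:.1f} Gbps, p95 demand: {p95} Gbps")
print(f"intervals exceeding an average-sized link: {congested} of {len(demand)}")
```

A link sized to the mean looks comfortably provisioned on paper yet congests in every retraining burst; sizing to a high percentile avoids that but leaves capacity idle most of the time, which is exactly the delicate economic trade-off McCabe describes.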

Cost models must also evolve. When network performance directly affects AI output, the return on investment calculation shifts. Spending on intelligent routing, optical capacity or automation may protect far larger investments in compute and application development. “It is no longer accurate to treat networking as a background cost centre,” McCabe argues. “It is an economic enabler that safeguards and amplifies AI investment. Framing it as discretionary spend obscures its impact on overall system productivity.”

This inversion also affects multi-cloud and regional deployment strategies. AI services often operate across private infrastructure, public clouds and distributed edge locations. One increasingly practical approach is to anchor regional AI services around metro or regional Internet exchanges, which allow cloud providers and enterprises to deliver sovereign AI capabilities with high performance and scale within a specific geography. “If cross-cloud connectivity is sub-optimal,” McCabe observes, “you introduce hidden costs in latency, retransmissions and operational complexity. Enterprises must therefore assess not only compute pricing but the performance and reliability of interconnection. Network design becomes inseparable from AI economics.”

Designing for revenue, not just resilience

Historically, network resilience was measured in terms of uptime percentages and failover mechanisms. In an AI-driven enterprise, resilience acquires a revenue dimension. Outages do not merely interrupt communication. They stall AI-enabled processes that generate income or savings. If an AI-driven recommendation engine goes offline, you lose conversions immediately. That is a direct revenue impact. The network’s reliability therefore influences top-line performance.

Design choices once considered conservative now appear strategic. Optical capacity planning, segmentation, telemetry and automation all shape how effectively AI services can scale. “The network must be engineered for deterministic performance,” McCabe says. “Best-effort transport is insufficient when AI underpins core operations. This requires deeper integration between networking and application teams.”

Automation emerges as a critical lever. AI workloads evolve rapidly. Traffic patterns shift with model updates, new use cases and changing demand. Manual configuration cycles cannot keep pace. “Closed-loop automation becomes essential,” McCabe argues. “You need the ability to adapt in real time as workloads fluctuate. Intelligent control planes that optimise routing and resource allocation dynamically reduce both operational cost and performance risk.

“The strategic implication is clear. Network architecture decisions influence the speed at which new AI services can be launched. Enterprises that can provision high-performance connectivity quickly gain agility. Those that cannot may find themselves constrained by legacy infrastructure. Network design is becoming a strategic business decision. It determines how fast you can innovate with AI.”

CFOs, boards and the new accountability

The transformation of networks into economic engines reshapes governance. CFOs must understand that AI return on investment depends on performance consistency. Boards evaluating AI strategy must consider connectivity alongside compute and data. Executives are beginning to recognise that network investment is tied to AI outcomes. The conversation is shifting from cost minimisation to value protection.

Metrics evolve accordingly. Instead of focusing solely on utilisation and uptime, organisations track job completion time, inference latency and application responsiveness. These indicators correlate more directly with revenue and efficiency gains. “When you measure network performance through the lens of AI output,” McCabe explains, “you see its economic contribution more clearly. The network becomes visible as a performance multiplier.

“Sovereignty and compliance considerations further elevate its importance. As AI services span jurisdictions, secure interconnection ensures regulatory adherence without sacrificing speed. Balancing sovereignty with performance requires sophisticated design. That complexity reinforces the need to treat networking as strategic infrastructure. Missteps can introduce legal risk alongside operational disruption.”
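Measuring the network through the lens of AI output, as described above, can be reduced to a simple reporting pattern: track attainment against a latency service-level objective rather than raw uptime. The latency samples and 300 ms SLO below are illustrative assumptions:

```python
# Hedged sketch: reporting an AI service's network-sensitive health as
# SLO attainment rather than uptime. Samples and the 300 ms objective
# are illustrative assumptions, not measured data.

def slo_attainment(latencies_ms, slo_ms):
    """Fraction of requests that met the latency SLO."""
    return sum(1 for lat in latencies_ms if lat <= slo_ms) / len(latencies_ms)

samples = [120, 180, 95, 310, 150, 700, 140, 160, 220, 130]
rate = slo_attainment(samples, 300.0)
print(f"SLO attainment at 300 ms: {rate:.0%}")
```

A service can show 100 per cent uptime over this window while only 80 per cent of requests meet the responsiveness objective, which is the gap the newer metrics are designed to expose.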

In this environment, the network ceases to be passive. It becomes an active participant in value creation. Decisions about topology, automation and capacity planning directly influence competitive positioning. Enterprises that internalise this reality invest accordingly. Those that cling to outdated cost-centre thinking risk underestimating the network’s role in AI success.

AI is not simply another workload. It reshapes how value flows through digital systems. The network sits at the centre of that flow, orchestrating data exchange, synchronising compute and enabling real-time intelligence. As McCabe concludes, “AI changes the economics of networking. Performance, reliability and latency now have business consequences. The organisations that understand that will design networks not as overhead, but as engines of growth.”
