AI is pushing data centre networks to their physical limits

Artificial intelligence is no longer just increasing demand for compute. It is forcing a reassessment of the networks that bind modern data centres together. As training and inference workloads scale, the tolerance for latency, congestion and packet loss is collapsing. A new technical milestone announced by Nokia highlights how the industry is beginning to respond.

Nokia has completed successful end-to-end testing of Ultra Ethernet Transport (UET) traffic across its data centre switch portfolio, working in collaboration with Keysight Technologies. The tests spanned multiple generations of Nokia’s high-performance switching platforms and validated support for Ultra Ethernet Consortium Specification 1.0, an emerging standard designed specifically for AI and high-performance computing workloads.

While the announcement is technical in nature, its implications are strategic. AI clusters are growing in size and complexity, and the network is increasingly the limiting factor in whether expensive compute can be used efficiently. In that context, the move toward Ultra Ethernet is less about incremental improvement and more about redefining what “Ethernet” needs to mean in the AI era.

Why AI workloads are breaking traditional Ethernet

Conventional data centre networks were built for general-purpose workloads where occasional congestion or packet loss could be tolerated. AI training and inference operate under very different constraints. Training jobs often run across thousands of GPUs, synchronised in real time. Even small amounts of packet loss or jitter can stall progress, forcing retransmissions and delaying job completion.
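The arithmetic behind this is stark. In a synchronous training step, every worker must finish its gradient exchange before the step completes, so a loss anywhere stalls everyone. The sketch below (illustrative numbers and function names of our own, not from the announcement) shows how a per-flow loss probability that looks negligible on one link becomes near-certain per step once thousands of flows are synchronised:

```python
# Illustrative sketch: a synchronous training step cannot complete until
# every worker's gradient exchange finishes, so one lost packet anywhere
# stalls the whole step. Assumed model: n_flows independent flows, each
# with per-step loss probability p.

def step_stall_probability(p: float, n_flows: int) -> float:
    """Probability that at least one of n_flows sees a loss in a step."""
    return 1.0 - (1.0 - p) ** n_flows

# The same 0.001% loss rate, at three cluster scales:
for n in (10, 1_000, 100_000):
    print(n, round(step_stall_probability(1e-5, n), 4))
```

At ten flows the stall risk per step is about 0.01%; at a hundred thousand flows it is roughly 63%, which is why "lossless" is a hard requirement rather than a nicety for large AI fabrics.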

At the same time, bandwidth demands are rising sharply. AI clusters now require sustained, predictable throughput at massive scale, combined with ultra-low latency. These requirements are exposing the limits of existing Ethernet designs, even as Ethernet remains the dominant networking technology across data centres globally.

The Ultra Ethernet Consortium was formed to address this gap. Its Specification 1.0 defines a new UET layer intended to support lossless, low-latency operation for AI and HPC workloads, while preserving Ethernet’s advantages around interoperability and cost. For vendors and operators, the challenge is turning those specifications into deployable, verifiable systems.

Testing networks at AI scale

The tests conducted by Nokia and Keysight focused on validating UET traffic across Nokia’s 7220 Interconnect Router and 7250 IXR switch families. Using 800-gigabit Ethernet interfaces, Keysight generated UET traffic flows across a network spanning multiple switch variants, all running Nokia’s SR Linux network operating system.

Crucially, the tests did not occur in isolation. Alongside UET traffic, the setup also carried Remote Direct Memory Access over Converged Ethernet (RoCEv2) traffic using Data Center Quantized Congestion Notification (DCQCN). This demonstrated that emerging Ultra Ethernet capabilities can coexist with existing AI networking technologies, rather than forcing operators into abrupt, disruptive transitions.
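For readers unfamiliar with DCQCN, the behaviour being exercised alongside UET can be sketched roughly as follows. This is a simplified condensation of the published DCQCN algorithm, with hypothetical class and method names of our own; it is not Nokia's or Keysight's implementation. On each Congestion Notification Packet (CNP), triggered by ECN marking, the sender cuts its rate multiplicatively; in quiet periods it decays its congestion estimate and recovers toward the last pre-cut rate:

```python
# Simplified DCQCN-style sender rate control (assumption: condensed from
# the published algorithm; names and structure are illustrative only).

class DcqcnSender:
    def __init__(self, line_rate_gbps: float, g: float = 1 / 256):
        self.rc = line_rate_gbps      # current sending rate
        self.rt = line_rate_gbps      # target rate to recover toward
        self.alpha = 1.0              # running estimate of congestion severity
        self.g = g                    # EWMA gain for updating alpha

    def on_cnp(self) -> None:
        """A CNP arrived: remember the current rate, then cut it."""
        self.rt = self.rc
        self.rc *= 1.0 - self.alpha / 2.0
        self.alpha = (1.0 - self.g) * self.alpha + self.g

    def on_quiet_period(self) -> None:
        """No CNP for a timer period: decay alpha, recover the rate."""
        self.alpha *= 1.0 - self.g
        self.rc = (self.rc + self.rt) / 2.0   # fast-recovery step

s = DcqcnSender(line_rate_gbps=800.0)
s.on_cnp()                    # congestion: rate roughly halves
print(round(s.rc, 1))         # 400.0
for _ in range(5):
    s.on_quiet_period()       # quiet network: rate climbs back toward 800
print(round(s.rc, 1))         # 787.5
```

The point of the coexistence test is that this ECN-driven feedback loop and UET's transport can share the same switches and queues without one starving the other.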

Ram Periakaruppan, vice president and general manager of network applications and security at Keysight, described Ultra Ethernet as one of several approaches being developed to support the next generation of scale-out AI fabrics. His emphasis was on interoperability and verifiability, arguing that early, practical testing is essential if new standards are to move from paper into production environments.

For Nokia, the test reinforces a broader positioning around AI-ready data centre fabrics. Rudy Hoebeke, vice president of software product management at Nokia, said the successful demonstration provides evidence that the company’s switching portfolio is aligned with the direction of Ultra Ethernet and capable of supporting the performance expectations of large-scale AI clusters.

The network becomes a competitive differentiator

Beyond the immediate technical results, the announcement reflects a wider shift in how AI infrastructure is evaluated. As clusters move towards 100,000 accelerators and beyond, the network is no longer a background utility. It directly determines utilisation, cost efficiency and time to insight.

Ethernet’s ubiquity makes it a natural foundation, but only if it evolves fast enough. Ultra Ethernet represents an attempt to reconcile AI’s extreme requirements with the realities of large-scale, multi-vendor data centres. The successful end-to-end testing suggests that this evolution is beginning to take shape.

For organisations investing heavily in AI, the message is clear. Compute alone is not enough. As AI pushes data centres to their physical limits, the intelligence embedded in the network fabric may prove just as decisive as the models running on top of it.
