Liquid cooling moves from option to requirement in large-scale data centre builds


The physical limits of traditional data centre design are being tested by the rapid expansion of artificial intelligence. As AI workloads drive rack densities higher and push thermal loads beyond what air cooling can reliably manage, operators are being forced to make structural decisions about how future facilities are built. Increasingly, those decisions point in one direction: liquid cooling as core infrastructure rather than a specialist add-on.

That shift is illustrated by a newly announced order, placed with LiquidStack by a major US-based data centre operator, for 300 megawatts of coolant distribution unit (CDU) capacity. The multi-site deployment will support the expansion of AI-ready facilities across the United States and reflects growing confidence that liquid cooling is now essential for high-density AI environments.

While the customer has not been named, LiquidStack describes it as a long-established operator with a growing portfolio of AI-ready data centres. The scale of the order, equivalent to hundreds of megawatts of cooling capacity, suggests that decisions once made rack by rack are now being taken at portfolio level, with long-term implications for design, procurement and operations.

From pilot projects to platform decisions

For much of the past decade, liquid cooling has been treated cautiously. Operators tested it in isolated environments, often limited to high-performance computing clusters or specific experimental workloads. AI has changed that equation. The power density of modern accelerators, combined with the economic value of maximising utilisation, has made thermal management a first-order concern.

The order centres on LiquidStack’s CDU-1MW platform, a high-capacity coolant distribution unit designed to support rapid deployment and future scalability. By standardising on megawatt-scale CDUs, operators can build repeatable cooling blocks aligned with aggressive expansion timelines, rather than custom-engineering solutions for each site.
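
As a rough illustration of what standardising on 1 MW blocks means at this scale, the sketch below decomposes the 300 MW order into per-site CDU counts. Only the 300 MW total and the 1 MW unit rating come from the announcement; the site count and N+1 redundancy figures are illustrative assumptions:

```python
# Back-of-the-envelope sizing for a multi-site CDU deployment.
# Only the 300 MW total and the 1 MW unit rating come from the
# announcement; site count and redundancy are assumptions.
import math

TOTAL_ORDER_MW = 300          # announced order size
CDU_RATING_MW = 1             # nominal capacity of one CDU-1MW unit

SITES = 6                     # hypothetical number of sites
SPARE_UNITS_PER_SITE = 1      # assumed N+1 redundancy per site

per_site_load_mw = TOTAL_ORDER_MW / SITES
units_per_site = math.ceil(per_site_load_mw / CDU_RATING_MW) + SPARE_UNITS_PER_SITE
total_units = units_per_site * SITES

print(f"Per-site thermal load: {per_site_load_mw:.0f} MW")
print(f"CDUs per site (incl. spare): {units_per_site}")
print(f"CDUs across the estate: {total_units}")
```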

The fact that the order spans multiple locations is significant. It suggests that liquid cooling is being embedded into baseline design assumptions for new AI-ready data centres, rather than introduced as an exception. For operators racing to bring capacity online, this kind of standardisation reduces risk and shortens deployment cycles.

LiquidStack said its manufacturing and delivery capabilities were a factor in securing the deal, enabling accelerated fulfilment to match the customer’s build-out schedule. In an environment where AI demand is moving faster than grid upgrades or construction capacity, supply chain reliability has become a competitive differentiator.

Cooling becomes an AI strategy question

The growing scale of liquid cooling orders points to a deeper shift in how operators think about AI infrastructure. Cooling is no longer a facilities issue delegated to engineering teams, but a strategic constraint that shapes where AI workloads can run and how quickly they can scale.

Joe Capes, chief executive of LiquidStack, described orders of this size as an inflection point, arguing that operators are now committing to liquid cooling as core AI infrastructure. His comment reflects a broader industry view that air cooling alone cannot sustain the densities required for next-generation AI training and inference.

LiquidStack’s CDUs are designed to integrate with direct-to-chip and hybrid liquid cooling architectures, providing precise thermal control and high availability. These approaches are increasingly seen as necessary to protect high-value hardware and ensure consistent performance, particularly as AI systems run continuously at high utilisation.
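
The capacity figures map onto a simple heat-transfer relation: the heat a coolant loop can remove is Q = ṁ·c_p·ΔT, the product of mass flow, specific heat and the temperature rise across the loop. A minimal sketch, assuming a water-like coolant and a 10 K temperature rise (both illustrative, not published specifications), shows the flow a 1 MW unit implies:

```python
# Coolant flow required to reject a given heat load, from Q = m_dot * c_p * dT.
# The temperature rise and water-like coolant properties are illustrative
# assumptions, not published LiquidStack specifications.

HEAT_LOAD_W = 1_000_000        # 1 MW, matching the CDU-1MW rating
CP_J_PER_KG_K = 4186           # specific heat of water near 25 degrees C
DELTA_T_K = 10                 # assumed supply/return temperature rise
DENSITY_KG_PER_M3 = 997        # density of water near 25 degrees C

mass_flow_kg_s = HEAT_LOAD_W / (CP_J_PER_KG_K * DELTA_T_K)
volume_flow_m3_h = mass_flow_kg_s / DENSITY_KG_PER_M3 * 3600

print(f"Required mass flow: {mass_flow_kg_s:.1f} kg/s")
print(f"Equivalent volume flow: {volume_flow_m3_h:.0f} cubic metres per hour")
```

Halving the temperature rise doubles the required flow for the same load, which is one reason pump capacity and pipework sizing matter so much in direct-to-chip loop design.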

The announcement also follows LiquidStack’s inclusion on NVIDIA’s recommended vendor list for CDUs, underlining the growing alignment between hardware vendors and cooling specialists. As AI platforms become more vertically integrated, choices about cooling, power and physical layout are being pulled closer to the silicon roadmap itself.

Scaling infrastructure to match AI ambition

Beyond the immediate commercial significance, the deal highlights how AI is reshaping data centre economics. A 300-megawatt cooling order implies facilities designed for sustained, high-density operation over many years. These are not speculative builds but long-term bets on AI demand continuing to grow.

LiquidStack’s recent expansion of its manufacturing capacity in Carrollton, Texas, points to expectations that such orders will become more common. As more operators transition from pilots to full-scale AI platforms, demand for industrialised liquid cooling is likely to increase further.

What remains unresolved is how quickly the rest of the data centre ecosystem can adapt. Power availability, water usage, operational skills and regulatory scrutiny all intersect with the adoption of liquid cooling at scale. For now, however, the direction of travel is clear.

As AI workloads push infrastructure beyond historical limits, cooling has moved from background consideration to enabling technology. Large, multi-site commitments such as this one suggest that, for many operators, the debate is no longer whether to adopt liquid cooling, but how fast it can be deployed across an entire estate.
