AI infrastructure is no longer being shaped by software ambition or commercial demand, but by the hard limits of heat, power and materials. Liquid cooling has moved from an engineering option to a structural dependency inside the AI value chain.
For much of the past decade, cooling was treated as a secondary problem in data centre design, something to be optimised once compute, storage and networking had already been specified. That hierarchy no longer holds. AI has not simply increased power density; it has fundamentally altered the thermodynamic profile of modern silicon, pushing cooling out of the facilities domain and into the core of product design itself, where it now constrains what processors can realistically become.
“The processor heat loads that are coming out of the silicon industry, specifically for AI, have gotten to a point where the only way to manage the thermal load is with direct to chip liquid cooling,” Richard Whitmore, CEO of Motivair by Schneider Electric, says. “The densities we are seeing now mean that air simply cannot remove heat fast enough from the silicon itself, regardless of how aggressive the airflow strategy becomes. We are no longer talking about optimisation, we are talking about basic physical feasibility.”
What makes this shift structurally significant is that it is driven not by infrastructure ambition but by materials science. The thermal envelope of modern AI silicon is no longer negotiable, and cooling now sits inside the product development loop rather than outside it. The industry is no longer designing data centres around compute; it is designing compute around what cooling can physically sustain.
“I do not think air cooling is disappearing any time soon,” Whitmore continues. “It will be used for as long as most of us will be in the industry, but it now plays a very different role. Liquid cooling is removing around seventy to eighty per cent of the heat from the servers, but the remaining heat is still being rejected to air, and that residual heat is still significant.
“We are talking about air-cooled racks at forty kilowatts, which is already high-performance precision cooling by any historical standard. So liquid cooling alone does not enable AI, it has to be part of a complete thermal system where air still plays a critical role.”
The consequence is not a technology transition but a reordering of thermal responsibility. Liquid becomes the primary extraction mechanism at the silicon level, while air becomes the secondary system that manages what remains, closing the loop between chip physics, server design and facility architecture in a way that did not exist even five years ago.
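To put rough numbers on that division of labour, the short sketch below works through a hypothetical rack. The 140 kW rack power and 75 per cent liquid capture fraction are illustrative assumptions rather than figures from Motivair, chosen only to show how the quoted percentages translate into a residual air load.

```python
# Illustrative heat split for a hypothetical liquid-cooled AI rack.
# The rack power and capture fraction are assumptions for the sake of
# the arithmetic, not vendor figures.

rack_power_kw = 140.0           # assumed total IT load of one AI rack
liquid_capture_fraction = 0.75  # assumed share removed by direct-to-chip cold plates

heat_to_liquid_kw = rack_power_kw * liquid_capture_fraction
heat_to_air_kw = rack_power_kw - heat_to_liquid_kw

print(f"Heat removed by liquid: {heat_to_liquid_kw:.0f} kW")
print(f"Residual heat to air:   {heat_to_air_kw:.0f} kW")
# ~35 kW of residual air load per rack sits in the same range as the
# 'forty kilowatt' air-cooled figure quoted above.
```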
Choosing the cooling architecture
Once liquid becomes unavoidable, the industry encounters a second-order problem that is often underestimated. Liquid cooling is not a single technology, but a fragmented ecosystem of approaches that each carry different assumptions about space, power, operational risk and capital investment. Rear door heat exchangers, direct to chip systems, immersion and liquid-assisted air all coexist, and none of them represent a universal answer across the diversity of real-world data centre environments.
“The deployment is customer driven,” Whitmore explains. “It depends on where they are in their journey, what assets they already have installed, and what they need to support. If there is usable air infrastructure, we will leverage that. If it is greenfield, we design very differently. It also depends on the server technology itself, because different processors require very different levels of cooling.
“Close-coupled cooling provides very short airflow paths and very efficient heat removal. But room cooling allows more flexibility where multiple units share the same system. It is really about what the customer already owns, what their strategy is, and how much disruption they can tolerate.”
In practice, cooling strategies emerge from inherited constraints rather than theoretical best practice. Operators rarely design thermal systems from a blank sheet, and most deployments reflect a negotiation between physical limits, financial exposure and organisational tolerance for risk.
“At the upper echelons of GPU and accelerator design, direct to chip liquid cooling is not optional,” Whitmore adds. “Those processors are designed on the assumption that liquid is present. You cannot meaningfully retrofit air back into that equation.”
Cooling is therefore no longer a facilities decision taken after hardware procurement. It is embedded in the assumptions of silicon itself, with infrastructure becoming part of the product specification rather than an external variable.
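The decision factors Whitmore lists can be caricatured as a rule-of-thumb sketch. The thresholds, category names and function below are illustrative assumptions, not Motivair or Schneider Electric guidance; real deployments weigh existing assets, power, space and risk tolerance far more subtly.

```python
# Rough, illustrative sketch of how the selection criteria described above
# might be weighed. Thresholds and wording are assumptions for this example.

def suggest_cooling_approach(rack_density_kw: float,
                             greenfield: bool,
                             has_facility_water: bool) -> str:
    if rack_density_kw >= 80:
        # Top-end accelerators are designed on the assumption liquid is present.
        return "direct-to-chip liquid, with air handling the residual heat"
    if rack_density_kw >= 40:
        if has_facility_water:
            return "direct-to-chip or rear door heat exchangers on facility water"
        return "liquid-to-air CDUs or rear door heat exchangers to bridge the gap"
    if greenfield:
        return "design liquid-ready infrastructure even if air suffices today"
    return "retain existing precision air cooling and plan a retrofit path"


print(suggest_cooling_approach(rack_density_kw=120,
                               greenfield=False,
                               has_facility_water=True))
```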
Designing from inside the chip
Historically, infrastructure adapted to hardware. Silicon teams built processors, server vendors packaged them into systems, and facilities engineers attempted to make the environment cope. That relationship has now inverted, with cooling becoming one of the upstream inputs into semiconductor design rather than a downstream response.
“We work directly with the silicon manufacturers,” Whitmore says. “We are looking inside the chips themselves to understand where the hot spots are, what the thermal challenges will be, and then we design the system outward from there. We are not reacting to products; we are co-developing alongside them.”
This changes the entire temporal structure of infrastructure engineering: cooling systems must now exist before the chips they are designed to support, which in turn requires development cycles that mirror those of the semiconductor industry.
“We have to have cooling technologies available for these chips before they are available for sale,” Whitmore continues. “That is where a lot of our investment goes. We develop product that is ready in advance of the launch of the silicon. We invest heavily in R and D and strategic partnerships. We are in lock step with silicon roadmaps. If a manufacturer changes something or pulls a launch forward, we are usually the first phone call they make.”
Much of this work happens before physical chips exist, using predictive modelling rather than empirical testing. “We use thermal test vehicles and advanced modelling,” Whitmore explains. “We know how those chips will behave before they physically exist. That gives us confidence in predictability and performance.”
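The predictions those models produce ultimately rest on thermal relationships that can be stated very simply. As a heavily simplified illustration, and not a description of Motivair's actual toolchain, a first-order steady-state estimate treats the path from die to coolant as a single lumped thermal resistance; every number below is an assumption.

```python
# First-order steady-state estimate of die temperature under a cold plate.
# A deliberately simplified illustration of the kind of relationship that
# predictive thermal models build on; all values are assumptions.

def die_temperature_c(power_w: float,
                      coolant_inlet_c: float,
                      r_junction_to_coolant_c_per_w: float) -> float:
    """T_die ~ T_coolant_inlet + P * R_th for a single lumped resistance."""
    return coolant_inlet_c + power_w * r_junction_to_coolant_c_per_w

# Assumed values: a 1,000 W accelerator, 32 C facility water,
# and 0.04 C/W from junction to coolant through the cold plate.
print(f"Estimated die temperature: "
      f"{die_temperature_c(1000.0, 32.0, 0.04):.0f} C")  # ~72 C
```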
Liquid cooling therefore becomes not just a deployment technology, but a live experimental platform for future compute, where infrastructure is effectively part of semiconductor product development.
Fluids, standardisation and operational risk
Despite the scale of the shift, one of the most striking aspects of modern liquid cooling is how conservative the industry has been about fluids themselves. Rather than experimenting with exotic chemistries, the market has converged on relatively mundane materials that prioritise predictability over novelty.
“The vast majority of direct to chip liquid cooling uses single phase fluids, typically blends of water and propylene glycol,” Whitmore explains. “That is what the market has standardised on. It is predictable, scalable and safe. What has been validated and approved is water-based blends. There is research into alternatives, but what is in production today is what has been tested over time and proven reliable at scale.”
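Part of the appeal of those blends is that the sizing arithmetic is straightforward. The sketch below applies the standard relation Q = m_dot x cp x dT with assumed property values for a roughly 25 per cent propylene glycol blend; the heat load and temperature rise are illustrative choices, not vendor specifications.

```python
# Required coolant flow to carry a given heat load, using Q = m_dot * cp * dT.
# Fluid properties are rough assumptions for a ~25% propylene glycol / water
# blend at typical operating temperature, not vendor data.

heat_load_kw = 100.0          # assumed heat captured by cold plates in one rack
delta_t_c = 10.0              # assumed coolant temperature rise across the rack
cp_j_per_kg_k = 3900.0        # approximate specific heat of the PG/water blend
density_kg_per_l = 1.02       # approximate density of the blend

mass_flow_kg_s = (heat_load_kw * 1000.0) / (cp_j_per_kg_k * delta_t_c)
volume_flow_lpm = mass_flow_kg_s / density_kg_per_l * 60.0

print(f"Mass flow:   {mass_flow_kg_s:.2f} kg/s")
print(f"Volume flow: {volume_flow_lpm:.0f} litres per minute")
# Roughly 1.5 litres per minute per kilowatt under these assumptions.
```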
This convergence matters because it reduces one of the biggest perceived risks for operators, which is lock-in. While the cooling ecosystem remains fragmented at the system level, it is becoming increasingly standardised at the material and operational level.
“Liquid cooling has been around for decades,” Whitmore adds. “IBM was doing it in mainframes in the 1980s. The fluids are non-toxic, technicians are already trained on similar systems like chilled water and steam, and there are no special permits required. The remaining concerns tend to centre on serviceability and operational risk rather than on the physics itself. We use highly engineered connectors and CDUs with leak detection. If anything happens, it is detected immediately. The industry has passed the acceptance hurdle.”
From an organisational perspective, the challenge is less about technology and more about process. “Safety first is always the rule,” Whitmore continues. “We train field service teams, we provide documentation and procedures, and we ensure customers are supported throughout deployment. It is not fundamentally different from any other critical infrastructure environment.”
Liquid cooling introduces proximity rather than novelty, moving thermal systems closer to electronics without fundamentally changing the underlying safety discipline.
Brownfield reality and heat reuse
While much of the public narrative around AI infrastructure focuses on new hyperscale campuses, a large proportion of liquid cooling deployments still take place inside existing facilities. Brownfield is not an edge case; it is the dominant reality for most organisations trying to integrate high-density AI workloads into legacy estates.
“We have been retrofitting liquid systems into existing infrastructure for over a decade,” Whitmore explains. “Many of the world’s largest supercomputers were deployed in brownfield environments. We have deep experience doing this where people would not expect. We use liquid to air systems, rear door heat exchangers and hybrid architectures to bridge gaps. We maximise existing assets first, then modernise over time. It is not just about cooling, it is about power, space and investment protection.”
This evolutionary approach also shapes how heat reuse is treated. While liquid makes reuse technically easier, economic value remains difficult to realise in most geographies.
“Liquid allows easier transport of heat and opens up new possibilities,” Whitmore says. “But value only exists if there is a user for that heat. We can create supply, but we must match it with demand. The ecosystem must exist.”
For most operators, heat reuse remains opportunistic rather than strategic, constrained by geography, planning and industrial alignment. “For now, our priority is reducing the energy overhead of removing heat,” Whitmore adds. “If reuse is possible, we support it, but efficiency comes first.” Heat reuse therefore remains a secondary outcome rather than a primary design driver, even as sustainability narratives continue to emphasise its potential.
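Rough numbers make both the scale of the opportunity and its dependence on a buyer clear. The sketch below assumes a hypothetical facility with a continuous 1 MW IT load and a 75 per cent liquid capture fraction; both figures are assumptions for illustration only.

```python
# Rough upper bound on reusable heat from a hypothetical facility.
# All inputs are assumptions for illustration; the economics depend
# entirely on whether a nearby heat customer exists.

it_load_mw = 1.0            # assumed continuous IT load
hours_per_year = 8760
capture_fraction = 0.75     # assumed share recoverable via the liquid loop

recoverable_mwh = it_load_mw * hours_per_year * capture_fraction
print(f"Recoverable low-grade heat: {recoverable_mwh:,.0f} MWh per year")
# ~6,600 MWh of warm water per year, valuable only if a district heating
# network or industrial user can actually absorb it.
```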
What changes next
Despite the pace of AI development, Whitmore sees continuity rather than disruption in the near term. The direction of travel is already clear, and most of the remaining innovation happens inside supply chains rather than in headline technologies. “Everything we see suggests that direct to chip single phase liquid cooling will dominate,” he says. “That is where silicon is heading, and that is where we are focused. We continue to expand product ranges, adapt to new rack designs and validate new environments. Most of this is invisible to end users, but it is what allows global deployment.”
The real differentiator, in his view, is not technology but execution. “Our clients deploy globally,” Whitmore concludes. “That means manufacturing, servicing and supporting globally. Cooling only works if the ecosystem around it works.”
The deeper consequence is that AI has quietly pulled infrastructure into the product development loop. Cooling is no longer downstream. It is upstream, shaping what silicon can even attempt to become. The future of compute is now bounded by thermal engineering as much as by algorithms, and liquid cooling is where those boundaries are being actively rewritten.