Cooling is no longer a supporting system within the data centre; it is becoming the mechanism that determines what can be deployed. As power densities climb beyond anything legacy environments were designed to handle, the ability to remove heat is shaping architecture, ownership, and ultimately the pace of AI adoption.
There is a persistent assumption in the industry that compute drives everything else. That assumption no longer holds. What is emerging instead is a more uncomfortable reality, in which cooling is not simply responding to demand but actively constraining it. Klaus Dafinger, Marketing Manager at Legrand Data Centre Solutions, frames the shift not as a technical adjustment but as a structural change in how data centres operate.
“So the more we go in HPC and AI, there is no clear line in the ownership anymore,” he explains. “In the past, the engineering team or the facility management were responsible for cooling decisions and maintenance. They were given a requirement per cabinet and asked to deliver cold air. Now, as densities increase and liquid cooling comes into play, it is no longer sufficient to say we need this much cold air. Cooling has a direct impact on the systems, on the chips themselves, and the responsibility shifts away from facility management towards IT and system ownership.”
That shift in responsibility is not cosmetic. It reflects a deeper integration of cooling into the compute layer itself, where thermal management is inseparable from system design. Once cooling moves onto the chip and into the rack, it stops being something that can be handled at arm’s length. It becomes part of the system architecture, and with that comes a redistribution of risk.
Density exposes the fault lines
The narrative that power and compute define the limits of infrastructure has been overtaken by events. What matters now is not just how much heat is generated, but how that heat behaves within increasingly dense environments. The rise of AI workloads has created thermal profiles that are far less predictable and far more concentrated than traditional enterprise systems.
“It is not only the increase in total load per cabinet; it is the hotspots,” Dafinger says. “The computing power has increased by a factor depending on the GPUs or CPUs used, and the way heat is generated is totally different. Traditional air cooling is not sufficient anymore. You need a combination of liquid cooling to remove the heat at the source and air cooling to handle the remaining load.”
This distinction between total load and localised heat is critical. Traditional cooling systems were designed to manage relatively uniform environments, where airflow could be distributed across a room or a row with reasonable effectiveness. That assumption breaks down as soon as a small number of racks begin to operate at significantly higher densities than their surroundings.
“You are no longer dealing with uniform heat distribution,” Dafinger continues. “If you have one or two cabinets with significantly higher density, traditional cooling solutions that operate at room or row level cannot respond precisely enough. That is where you start to see thermal risks, because the cooling power is spread across too large an area.”
The consequence is that cooling is no longer about scaling capacity in aggregate. It is about precision, about delivering cooling exactly where it is needed without destabilising the wider environment. That requirement fundamentally changes the types of systems that can be deployed and the way infrastructure must be designed.
Transition without a reset
One of the more complex realities facing operators is that there is no clean transition path from legacy environments to AI-ready infrastructure. The idea of a wholesale shift to liquid cooling is attractive in theory, but largely impractical in operational terms. Most data centres are not starting from a blank sheet of paper, and the installed base cannot simply be replaced.
“The majority of the market is not yet in the position to deploy direct chip cooling on a wide range,” Dafinger continues. “What they want is to have the infrastructure in place to be ready for it. They need to cover the span of what they have now and what they will have in three years, and that is the challenge.”
This creates a requirement for continuity rather than disruption. Operators are not looking for a single end-state solution, but for architectures that can support gradual evolution without forcing large-scale redesigns. That often means creating dedicated zones within existing facilities, where new cooling approaches can be introduced without affecting the rest of the environment.
“You create a zone within a data centre and say this is my AI or HPC zone,” Dafinger says. “You prepare this row of cabinets for future expansion. You put the infrastructure in place, the piping, the CDUs, and you equip it so that you can transition to direct chip cooling when needed. That allows you to move step by step rather than all at once.”
This hybrid reality is not a temporary phase. It is likely to persist for years, as different generations of hardware coexist within the same facility. The ability to manage that coexistence efficiently becomes a defining capability. “There will be no moment where everything switches from air to liquid,” Dafinger explains. “You will always have a mix. You can have cabinets where part of the system is direct chip cooled and part is still air cooled. The infrastructure has to support that without creating inefficiencies or risks.”
The architectures that survive
Despite the proliferation of cooling technologies, certain patterns are beginning to emerge in real deployments. The market is not converging on a single solution, but it is converging on combinations that can handle both current requirements and future uncertainty.
“There is an architecture that is crystallising as the most sustainable and capable way of managing this transition,” Dafinger says. “That is the combination of direct chip cooling and rear door heat exchangers. It is not the only solution, but it is the most efficient way to bridge what you have today and what you will need tomorrow.”
The appeal of this approach lies in its flexibility. Rear door heat exchangers can operate efficiently across a wide range of densities, making them suitable for environments that are in transition. At the same time, they integrate naturally with liquid cooling systems, allowing operators to extend their capabilities without replacing existing infrastructure.
“The rear door heat exchanger can handle low densities very efficiently, but it can also scale to very high densities, above 90 kilowatts per cabinet depending on the setup,” Dafinger adds. “That means you can use it today and still use it when your requirements increase. At the same time, if you introduce direct chip cooling, you already have the liquid infrastructure in place, and you can connect both systems.”
The alternative approaches, while effective in specific contexts, tend to struggle when faced with the variability of real-world deployments. Systems designed for uniform environments or specific density ranges become inefficient or impractical as soon as conditions change.
“If you try to scale traditional solutions like CRAC units, fan walls, or in-row cooling to very high densities, you reach hard limits,” Dafinger explains. “They were never designed for 100-kilowatt cabinets or more. At that point, it is not a question of optimisation, it is a question of feasibility.” This is where the narrative around choice begins to narrow. While multiple technologies exist, the range of viable options shrinks rapidly as density increases and flexibility becomes a requirement.
A hybrid future, not a replacement
Industry messaging often frames liquid cooling as an inevitable replacement for air. The reality is more nuanced, and more constrained. Liquid cooling addresses a critical part of the problem, but it does not eliminate the need for air. “Liquid cooling is clearly the future for AI and HPC deployments, but it will never be the only solution,” Dafinger continues. “You always have components that are not liquid cooled, such as power and storage. There is always a part of the heat that must be removed with air.”
Even in highly optimised systems, that residual load is significant. As total densities increase, the absolute amount of heat that must be handled by air also increases, even if the proportion remains relatively small. “If you have a 200-kilowatt cabinet and 20 percent of that needs to be cooled by air, that is still 40 kilowatts,” Dafinger explains. “If you look at future systems with 600 kilowatts or more, the remaining air-cooled portion becomes very large in absolute terms. That cannot be handled by traditional air cooling systems.”
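The arithmetic behind that residual load is easy to sketch. The short Python snippet below is purely illustrative: it applies the 20 percent air-cooled share from Dafinger's example to a few assumed cabinet densities, to show how quickly the absolute air load grows; the density figures are examples chosen for illustration, not vendor specifications.

# Illustrative sketch only: the 20 percent air-cooled share comes from
# Dafinger's example; the cabinet densities below are assumed figures
# chosen to show how the absolute air-cooled load grows with density.

AIR_FRACTION = 0.20  # share of cabinet heat that direct chip cooling does not capture

for cabinet_kw in (100, 200, 600):
    air_kw = cabinet_kw * AIR_FRACTION
    liquid_kw = cabinet_kw - air_kw
    print(f"{cabinet_kw} kW cabinet: ~{liquid_kw:.0f} kW removed by liquid, "
          f"~{air_kw:.0f} kW still to be removed by air")

At 200 kilowatts the air-cooled remainder is 40 kilowatts; at 600 kilowatts it reaches 120 kilowatts, already well beyond what conventional room-level air systems were sized for.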
The implication is that hybrid architectures are not a stepping stone, but an endpoint. The combination of direct chip cooling and efficient air-based systems becomes the only viable way to manage both the source of heat and the residual load. “The winning strategy is hybrid,” Dafinger says. “You need liquid cooling to remove the heat at the source, and you need efficient air cooling to manage what remains. That is what we see in real deployments, and that is what will continue.”
Designing for what comes next
If cooling is now shaping what can be deployed, then the timing of design decisions becomes critical. Treating cooling as a downstream consideration is no longer viable, because the choices made at the facility level directly influence efficiency, scalability, and even feasibility.
Dafinger believes that the earlier you are involved in the design process, the more efficient the system can be. “Cooling is part of a bigger loop that includes water temperatures, chillers, and the overall facility design,” he says. “If you can influence those parameters early, you can design a system that operates much more efficiently.”
One of the most significant levers is water temperature. Higher operating temperatures enable greater use of free cooling, reducing reliance on energy-intensive mechanical systems. Achieving that, however, requires coordination across the entire design.
“The higher you can run the water temperatures, the more efficient the system becomes,” Dafinger explains. “You can use more free cooling and reduce the energy required for chillers. But that is only possible if the system is designed for it from the beginning. If you come in at the end, you can manage the cooling, but you cannot optimise the whole system.”
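To make the free cooling relationship concrete, the sketch below, again purely illustrative, counts the hours in a synthetic climate profile where a dry cooler could meet a given facility water supply temperature. The supply temperature setpoints, the approach temperature, and the generated ambient data are all assumptions for illustration, not figures from Legrand or from any real site.

# Illustrative sketch only: higher facility water supply temperatures
# increase the share of hours in which free cooling (dry coolers rather
# than chillers) can carry the load. The climate profile, setpoints, and
# approach temperature are assumed values, not data from any real site.

import random

random.seed(0)
# Hypothetical hourly ambient temperatures for one year, in degrees C
ambient_hours = [random.gauss(12, 8) for _ in range(8760)]
APPROACH_C = 4  # assumed dry-cooler approach: water leaves at roughly ambient + 4 C

for supply_temp_c in (18, 27, 35):
    free_hours = sum(1 for t in ambient_hours if t + APPROACH_C <= supply_temp_c)
    share = free_hours / len(ambient_hours)
    print(f"Supply water at {supply_temp_c} C: free cooling available "
          f"for about {share:.0%} of hours under the assumed profile")

The exact percentages depend entirely on the climate and the equipment, but the direction is the point: every degree of headroom in the water loop translates into more hours when the chillers can stay off.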
This reinforces the idea that cooling is no longer a discrete discipline. It is part of an integrated system that spans compute, power, and facility infrastructure. Decisions in one area have direct consequences in another, and those interdependencies are becoming more pronounced as densities increase.
Cooling defines the outcome
The direction of travel is becoming difficult to ignore. As compute continues to scale, the ability to manage heat is emerging as the defining constraint on infrastructure. This is not simply a technical challenge, but a strategic one, because it determines what can be deployed and how quickly.
“If cooling is not planned correctly, you will face limitations very quickly,” Dafinger says. “You may have the compute and the power available, but you cannot deploy it because you cannot remove the heat. That becomes a direct limitation on what the data centre can deliver.”
That limitation is already visible in the market. Operators are encountering scenarios where demand for high-density systems exists, but the infrastructure cannot support them without significant modification. In those cases, cooling becomes the bottleneck not just for performance, but for growth. “We see customers who want to deploy higher density systems, but their existing environments cannot support it,” Dafinger concludes. “That is where planning and transition strategies become critical, because without them, you cannot move forward.”
The conclusion is not that cooling has become more important. It is that it has become decisive. The systems that define the next phase of AI infrastructure will not be limited by compute capability alone, but by the ability to sustain it. In that environment, cooling is no longer a background consideration. It is the factor that determines what is possible.



