The race to cool artificial intelligence moves to the back of the rack

Artificial intelligence is reshaping data centre engineering in unexpected ways, forcing operators to confront a growing physical constraint that software innovation alone cannot solve. As computing density accelerates, the challenge is no longer simply delivering more processing power but removing the heat it generates. Rear door heat exchangers are emerging as one response to that pressure, signalling a shift in how cooling is deployed at scale; the trend is highlighted in recent industry analysis from infrastructure specialist Legrand.

Rear door heat exchangers, known as RDHx systems, replace the standard rear door of a server cabinet with an integrated heat exchange unit designed to capture hot exhaust air directly as it leaves equipment. Rather than cooling an entire room, the system removes thermal energy at source, returning air to the data hall at near ambient temperature.

The approach reflects a broader change driven by artificial intelligence and high performance computing workloads. Rack densities that once averaged only a few kilowatts now routinely exceed 50 kilowatts in AI environments, with some deployments operating above 200 kilowatts per rack. Traditional perimeter cooling systems struggle under these conditions, creating hot spots and inefficiencies that increasingly limit further expansion.

By positioning cooling directly behind servers, RDHx technology reduces reliance on large-scale room cooling infrastructure while maintaining compatibility with existing air-cooled IT equipment. The result is a hybrid model that sits between conventional air cooling and full liquid cooling architectures.

Cooling moves closer to computation

The technical principle behind rear door heat exchangers is straightforward but consequential. Hot air produced by servers flows immediately into a coil system where chilled water absorbs thermal energy. The cooled air is then discharged back into the data centre environment, preventing heat accumulation and stabilising operating temperatures.
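
As a rough illustration of the heat balance involved (a simplified sketch with assumed figures, not data from Legrand), the chilled water flow needed to absorb a given rack load can be estimated from Q = m · cp · dT:

    # Approximate chilled water flow needed to absorb a rack's heat load.
    # Illustrative values only; actual RDHx sizing depends on coil design and approach temperatures.
    rack_load_kw = 50.0    # heat output of one AI rack, in kilowatts (assumed)
    delta_t_c = 10.0       # water temperature rise across the coil, in degrees Celsius (assumed)
    cp_water = 4.186       # specific heat of water, kJ per kg per degree Celsius

    flow_kg_per_s = rack_load_kw / (cp_water * delta_t_c)   # roughly 1.2 kg/s
    print(f"Required water flow: {flow_kg_per_s:.2f} kg/s "
          f"(about {flow_kg_per_s * 60:.0f} litres per minute)")

At these assumed figures a single 50 kilowatt rack needs on the order of 70 litres of chilled water per minute through its rear door coil, which is why the supply loop, not the coil itself, often sets the practical limit.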

Systems fall broadly into two categories. Passive units rely on server fans to push air through the exchanger and typically support moderate densities of up to 20 kilowatts per rack. Active systems incorporate independent variable speed fans, enabling cooling capacity from 15 kilowatts to more than 200 kilowatts per rack, levels associated with AI training clusters and high performance computing installations.

Because the technology replaces an existing cabinet component rather than adding standalone equipment, it introduces no additional floor footprint. This characteristic is becoming increasingly important as operators attempt to maximise computing capacity within fixed real estate constraints.

Compatibility with existing infrastructure also allows gradual deployment. Facilities can introduce rack-level cooling incrementally without redesigning entire data halls, extending the lifespan of air-cooled environments while preparing for higher density workloads.

Efficiency pressures reshape infrastructure decisions

Artificial intelligence is amplifying demands for both performance and efficiency. Cooling systems must now respond dynamically to fluctuating thermal loads created by variable compute activity. Active RDHx systems adjust fan speeds automatically according to heat output, aligning energy use more closely with real workload conditions.
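
The control behaviour can be pictured with a minimal sketch of a proportional fan rule; the function name, setpoint and gain below are hypothetical and stand in for vendor-specific firmware rather than describing any particular product:

    # Hypothetical proportional control of RDHx fan speed based on exhaust air temperature.
    def fan_speed_percent(exhaust_temp_c, setpoint_c=30.0, gain=8.0,
                          min_speed=20.0, max_speed=100.0):
        """Raise fan speed as exhaust air temperature climbs above the setpoint."""
        error = exhaust_temp_c - setpoint_c
        speed = min_speed + gain * max(error, 0.0)
        return min(max(speed, min_speed), max_speed)

    print(fan_speed_percent(28.0))   # light load: fans idle near minimum (20.0)
    print(fan_speed_percent(38.0))   # AI training burst: fans ramp towards full speed (84.0)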

This targeted approach can reduce dependence on additional computer room air conditioning or air handling units, lowering overall energy consumption and supporting improved power usage effectiveness. The technology also addresses a growing operational constraint: limited space for expanding traditional cooling infrastructure in existing facilities.
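
To make the power usage effectiveness point concrete, a simplified comparison (with assumed, illustrative figures rather than measured results) shows how trimming cooling overhead for the same IT load moves the metric:

    # Illustrative PUE comparison; all figures are assumptions.
    it_load_kw = 1000.0                 # power drawn by IT equipment
    overhead_room_cooling_kw = 500.0    # cooling, fans and losses with perimeter units
    overhead_rdhx_kw = 350.0            # the same facility after shifting load to rear door exchangers

    pue_before = (it_load_kw + overhead_room_cooling_kw) / it_load_kw   # 1.50
    pue_after = (it_load_kw + overhead_rdhx_kw) / it_load_kw            # 1.35
    print(f"PUE before: {pue_before:.2f}, after: {pue_after:.2f}")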

Integration with chilled water systems typically operating between 14 and 28 degrees Celsius enables compatibility with standard cooling loops, while features such as quick-disconnect couplings and leak detection sensors support maintenance and operational safety.

The implications extend beyond efficiency metrics. Artificial intelligence workloads often require precise thermal stability to maintain performance consistency and hardware reliability. Cooling variability increasingly translates into computational risk, making thermal management a strategic rather than purely operational concern.

A bridge between cooling eras

Rear door heat exchangers illustrate how data centre design is evolving through incremental adaptation rather than abrupt replacement of existing technologies. Air cooling remains central to most facilities, yet liquid-assisted approaches are increasingly necessary to sustain modern compute densities.

The technology is particularly suited to AI and machine learning clusters, high performance computing environments, colocation facilities with mixed tenant requirements and retrofit projects where space limitations prevent large-scale infrastructure expansion.

As artificial intelligence continues to push hardware towards higher power densities, cooling strategies are becoming inseparable from computing strategy itself. The emergence of rack-level heat removal suggests that the future of AI infrastructure may depend less on how data centres are cooled as a whole and more on how precisely heat can be managed at the point where computation occurs.
