The AI infrastructure race is being framed as a global scramble for new power, new land, and new permits, yet one of the largest opportunities sits quietly inside data centres that already exist. Trillions of dollars are being committed to future capacity while enormous volumes of usable compute remain locked behind thermal limits that most boardrooms still struggle to see.
As AI workloads push power density far beyond legacy assumptions, the competitive battleground is shifting from megawatts acquired to megawatts realised. The defining question is no longer who can secure power fastest, but who can turn the power they already have into sustained intelligence.
For the past two years, the language of AI infrastructure has been dominated by scale. Gigawatts have become shorthand for ambition, with hyperscalers, governments, and investors competing for grid access as though power itself were the finished product. New campuses are announced years before they can be energised, while timelines stretch further into the future and political resistance to new builds grows louder by the month.
What this narrative largely ignores is a quieter, more immediate reality. A vast proportion of the global data centre estate is already powered, permitted, connected, and operational, yet unable to support modern AI workloads at meaningful density. Facilities that were commissioned as recently as two or three years ago are now struggling to accommodate accelerators that simply did not exist when their cooling architectures were designed. The result is a growing class of sites that look healthy on paper but are functionally constrained in practice.
According to Paul Quigley, President of AIRSYS Cooling Technologies, this is not an edge case but a systemic blind spot. “We are watching data centres that were considered best in class not long ago being quietly reclassified as legacy,” he says. “They are powered, permitted, and ready, but they cannot absorb AI without major disruption. The industry is chasing new gigawatts while ignoring the compute that is already trapped inside the fence.”
Powered and permitted is not a footnote
The phrase “powered and permitted” is starting to circulate more frequently, but its significance is still widely underestimated. These are not marginal facilities or obsolete assets. They are modern sites with contracted electricity, regulatory approval, and physical infrastructure already in place, yet unable to translate that power into AI-grade compute because their thermal envelope has effectively closed.
For operators, this creates an uncomfortable strategic dilemma. Rebuilding or abandoning facilities is capital-intensive and slow. Waiting for new power connections exposes organisations to multi-year delays that sit entirely out of step with AI demand. In many cases, neither option aligns with business reality.
“The industry has trained itself to think in binaries,” Quigley says. “Build new or start over. But that mindset simply does not work anymore. The fastest path to AI capacity today is not new land or new permits. It is unlocking the power you already own.”
That shift in thinking reframes the challenge entirely. The constraint is no longer electricity availability, but conversion efficiency. Power exists, contracts are signed, and substations are live, yet compute output stalls because heat cannot be removed fast enough to sustain higher densities. This is where long-standing performance metrics begin to mislead rather than inform.
Why PUE stopped being enough
For more than a decade, Power Usage Effectiveness (PUE) has been the industry’s primary benchmark. It helped drive meaningful improvements in operational efficiency, particularly during the cloud era when workloads were relatively homogeneous and thermal profiles predictable. In an AI-driven environment, however, PUE tells only a fraction of the story.
“At a glance, two data centres can look identical,” Quigley says. “Same utility power, same PUE, same headline efficiency. But one might deliver ten megawatts of usable compute and the other sixteen. The difference is not marginal, it is existential, yet PUE does not capture it at all.”
This is the problem of stranded power: electricity that has been paid for, provisioned, and allocated, but cannot be converted into productive work because cooling systems hit their ceiling long before the electrical limit is reached. From a financial perspective, that stranded power represents lost revenue, lost competitiveness, and wasted capital.
It is for this reason that Quigley and others are pushing Power Compute Effectiveness (PCE) as a complementary lens. PCE shifts the focus away from overheads and towards outcomes, measuring how much sustained compute is produced per unit of power consumed.
“Once you start looking at data centres as AI factories, the logic becomes obvious,” he says. “Power is the primary input. Compute is the output. Any serious investor would want to know how effectively that conversion is happening. PCE forces that conversation.”
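The arithmetic behind that conversation is simple enough to sketch. The short example below applies one plausible formulation of PCE, sustained compute divided by contracted power, to the ten-versus-sixteen-megawatt contrast Quigley describes; the 22 MW contract and 1.3 design PUE are hypothetical figures chosen only to make the comparison concrete, not figures from any real site.

```python
# A minimal sketch of Power Compute Effectiveness (PCE) alongside PUE.
# The 10 MW vs 16 MW contrast echoes Quigley's example; the 22 MW
# contract and 1.3 design PUE are hypothetical numbers for illustration.

CONTRACTED_MW = 22.0   # assumed utility contract, identical for both sites
DESIGN_PUE = 1.3       # assumed headline PUE, identical for both sites

sites = {"Site A": 10.0, "Site B": 16.0}  # sustained AI compute load (MW)

for name, compute_mw in sites.items():
    facility_draw_mw = compute_mw * DESIGN_PUE        # IT load plus overheads
    stranded_mw = CONTRACTED_MW - facility_draw_mw    # paid-for but unused power
    pce = compute_mw / CONTRACTED_MW                  # compute out per contracted MW in
    print(f"{name}: PUE {DESIGN_PUE:.2f}, "
          f"PCE {pce:.2f}, stranded {stranded_mw:.1f} MW")

# Both sites report the same PUE, yet Site B converts roughly 73% of its
# contracted power into sustained compute versus roughly 45% for Site A.
```

Measured this way, two facilities with identical headline efficiency diverge sharply once the question becomes how much of the contract is actually turned into work.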
The thermal ceiling nobody budgeted for
The underlying cause of stranded power is not a lack of innovation but a mismatch between workload evolution and infrastructure assumptions. Most existing facilities were designed around air-based or compressor-heavy cooling systems optimised for enterprise and cloud workloads that no longer define demand.
AI accelerators introduce heat densities that stretch those systems beyond their intended design envelope. Attempting to compensate by pushing more air or adding more mechanical cooling quickly becomes inefficient, expensive, and, in many cases, physically impractical within existing buildings.
“People assume they need to triple or quadruple cooling just to add AI,” Quigley says. “That leads them to conclude the facility is finished. It is not the data centre that is obsolete, it is the cooling philosophy.”
This distinction matters because the cost and disruption associated with traditional upgrades often outweigh the perceived benefit. Raising floors, installing new chillers, or reworking mechanical systems can involve structural changes that disrupt operations and inflate capital expenditure to prohibitive levels.
What is emerging instead are cooling approaches designed specifically to unlock density within existing constraints. By removing reliance on compressors and shifting heat management closer to the rack and chip, these architectures allow existing power to support dramatically higher compute output without wholesale reconstruction. “The goal is not to rebuild,” Quigley says. “It is to change how heat is handled so the same power delivers more value. When that happens, sites people had written off suddenly become strategically relevant again.”
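A rough power-budget model shows why the cooling philosophy, rather than the utility contract, so often sets the ceiling. In the sketch below, the overhead ratios are illustrative assumptions rather than measured values for any particular product, but the shape of the trade-off holds: the less power spent on heat rejection, the more compute a fixed contract can sustain.

```python
# A rough power-budget sketch of why the cooling approach, not the utility
# contract, often sets the compute ceiling. All ratios below are
# illustrative assumptions, not vendor figures.

FACILITY_BUDGET_MW = 20.0   # assumed fixed, already-contracted utility power
OTHER_OVERHEAD = 0.08       # assumed losses to UPS, distribution, lighting, etc.
                            # (as a fraction of IT load)

# Assumed cooling power per watt of IT load under two philosophies.
cooling_overhead = {
    "compressor-heavy air cooling": 0.45,
    "compressor-free, close-coupled cooling": 0.12,
}

for approach, cooling_ratio in cooling_overhead.items():
    # Every watt of IT load drags (cooling_ratio + OTHER_OVERHEAD) watts of
    # overhead with it, so the sustainable IT load under a fixed budget is:
    it_load_mw = FACILITY_BUDGET_MW / (1 + cooling_ratio + OTHER_OVERHEAD)
    implied_pue = FACILITY_BUDGET_MW / it_load_mw
    print(f"{approach}: ~{it_load_mw:.1f} MW of sustained IT load "
          f"(implied PUE {implied_pue:.2f})")

# Same 20 MW contract: roughly 13.1 MW of compute under the first model
# versus roughly 16.7 MW under the second, with no new power or permits.
```

Under those assumptions, the same contracted power yields several additional megawatts of compute simply by changing how heat is rejected, which is the substance of Quigley's argument.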
Effectiveness is becoming the differentiator
As grid access tightens globally, effectiveness is overtaking expansion as the primary competitive advantage. Operators who can extract more compute from existing power move faster than those waiting for new connections, regardless of balance sheet strength.
This shift also exposes a gap in how success is communicated. The industry remains comfortable announcing megawatts secured, yet far less transparent about how much of that power is monetised through sustained AI workloads. “We call them AI factories for a reason,” Quigley says. “If power is the input, output matters. Most industries measure that relentlessly. Data centres are only just starting to catch up.”
Seen through this lens, PCE becomes more than a technical metric. It becomes a strategic signal that informs capital allocation, site selection, and long-term planning, particularly as governments and regulators begin to scrutinise how power is used rather than simply how it is sourced.
The irony is that much of the capacity needed for the next phase of AI already exists. It is not waiting on new grids or new permits. It is waiting for a different way of thinking about heat. “The race is not only about who builds the biggest campus,” Quigley concludes. “It is about who wastes the least power. In many cases, the most valuable megawatts are already inside the fence, waiting to be unlocked.”