The power struggle behind the intelligence boom

Artificial intelligence promises efficiency, optimisation and a cleaner digital future, yet the infrastructure supporting it is consuming energy at an unprecedented rate. The question facing the industry is not whether AI will scale, but whether the systems powering it can evolve fast enough to carry the weight of that ambition.

The conversation around artificial intelligence has shifted decisively from software capability to physical reality. Data centres are no longer neutral environments quietly supporting digital services. They are becoming industrial facilities designed to manufacture intelligence at scale, and that transition has placed energy at the centre of every strategic discussion. The power required to train and operate large AI models is redefining how operators think about location, resilience and long-term viability.

For decades, infrastructure planning focused on incremental efficiency gains. Server hardware improved, cooling systems evolved and operators squeezed more performance from existing designs. AI has disrupted that rhythm entirely. Workloads now run continuously, requiring dense clusters of accelerators operating with almost no tolerance for interruption. Electricity has shifted from operational background noise to the defining constraint shaping strategic decisions.

The paradox is difficult to ignore. AI systems are deployed to optimise industries and reduce waste, yet the facilities enabling those systems are placing growing pressure on energy networks already facing capacity and transmission challenges. Growth in computational demand is happening faster than grids can expand, forcing operators to reassess where infrastructure can realistically be built and how it can be powered sustainably.

This is not a theoretical debate. Across Europe and beyond, investment decisions are increasingly determined by energy economics rather than digital ambition. Regions with affordable, predictable power are attracting AI infrastructure, while those with higher costs or slower grid expansion risk falling behind. The energy conversation has moved from engineering teams to boardrooms.

Power as a strategic differentiator

For Kao Data, compute and electricity are now inseparable. In discussions about AI infrastructure, the company argues that energy pricing and grid constraints have become decisive factors influencing where large-scale deployments occur.

“The UK has possibly the highest energy costs globally now and AI training, and any computing for that matter, is very dependent upon consumption of electricity,” Spencer Lamb, Managing Director and Chief Commercial Officer at Kao Data, says. “The more intensive that computing is, the higher the electricity bill. Countries with high energy costs become less attractive for organisations deploying large scale AI, so compute tends to be located where energy is cheaper.”

The implication is clear. AI infrastructure follows power, not policy. National strategies may articulate ambitions to lead in artificial intelligence, but operators ultimately build where the economics make sense. “When people talk about compute now, they directly link it to the amount of power it needs,” Lamb continues. “Utilities are becoming far more above the line than they ever have been in the world of technology. Power is key, and in many cases it is king.”

This shift is already influencing where infrastructure investment flows. Nscale’s decision to develop large-scale AI capacity in Norway reflects an alignment between renewable energy access and long-term infrastructure planning. “Real leadership in AI requires capability, deep infrastructure, scalable compute, trusted partners and a platform that can deliver when it matters most,” the company states in outlining its Stargate Norway project. “The regions that build this infrastructure now will shape the next generation of economic growth.”

The project’s reliance on renewable hydropower and closed-loop cooling is presented not as branding but as infrastructure logic. Abundant energy and predictable pricing become foundational design principles rather than secondary sustainability benefits.

Taken together, these perspectives point to a broader realignment. Geography in the AI era is defined less by proximity to users and more by proximity to energy. Regions unable to provide cost-effective and reliable electricity risk becoming consumers of AI services created elsewhere.

Efficiency inside the data centre

If energy access determines where AI facilities are built, operational efficiency determines whether those facilities remain viable over time. This is where Socomec places its focus, arguing that visibility into energy consumption is the starting point for sustainable operations.

Colin Dean, Managing Director at Socomec, believes the industry must first understand its own consumption patterns before meaningful optimisation can occur.

“Intelligent power monitoring should be a top priority for data centre operators seeking efficiency gains,” he says. “It enables better understanding and optimisation of power usage, and the latest systems can monitor energy consumption at a sub-tenant level, allowing operators and customers to gain visibility into their energy use and carbon footprint.”

Dean argues that this visibility changes decision-making. Operators can identify inefficiencies, understand load behaviour and improve Power Usage Effectiveness, the ratio of total facility energy to the energy delivered to IT equipment, without compromising availability. In a world where AI workloads amplify every inefficiency, such insight becomes operationally critical.
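The metric behind that argument is simple to compute once sub-metered data exists. The sketch below shows the standard Power Usage Effectiveness calculation; the consumption figures are illustrative assumptions, not drawn from Socomec or any specific facility.

```python
# Hypothetical sub-metered readings over one month (kWh); figures are
# illustrative only, chosen to yield a typical colocation-grade PUE.
it_load_kwh = 4_200_000        # energy delivered to servers, storage and network
total_facility_kwh = 5_460_000 # IT load plus cooling, power distribution, lighting

# Power Usage Effectiveness: total facility energy divided by IT energy.
# A PUE of 1.0 would mean every kilowatt-hour reaches IT equipment.
pue = total_facility_kwh / it_load_kwh
overhead_kwh = total_facility_kwh - it_load_kwh  # energy spent on everything else

print(f"PUE: {pue:.2f}")                  # PUE: 1.30
print(f"Overhead: {overhead_kwh:,} kWh")  # Overhead: 1,260,000 kWh
```

Monitoring at the sub-tenant level, as Dean describes, amounts to running this calculation continuously at finer granularity, so that the overhead term can be attributed and attacked rather than reported as a single facility-wide number.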

The shift toward transparency is also being driven by regulation. Energy reporting requirements across Europe are increasing, placing pressure on operators to demonstrate measurable performance improvements rather than relying on headline sustainability commitments.

“Data centres must strike a sharp balance between uptime and efficiency,” Dean adds. “The next era of data centres is about high performance with a reduced footprint. Visibility generates insights that empower operators to take action and build resilience.”

Power protection technologies are evolving alongside monitoring systems. New generations of uninterruptible power supplies are capable of significantly higher efficiency, reducing losses that would otherwise translate into additional cooling demand. “While many UPS units operate at around ninety-five per cent efficiency, newer models can reach up to ninety-nine per cent,” Dean notes. “A refresh could deliver long-term benefits by reducing energy losses, cooling requirements and carbon emissions while maintaining continuous power protection.”
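The gap between those two efficiency figures is larger than it looks, because UPS losses are paid twice: once as wasted electricity and again as heat the cooling plant must remove. A rough sketch of the arithmetic, assuming a hypothetical 1 MW IT load running year-round:

```python
# Annual UPS losses for a hypothetical 1 MW IT load at the two efficiency
# levels quoted in the article (95% vs 99%). Figures are illustrative.
it_load_kw = 1_000.0
hours_per_year = 8_760

def annual_loss_kwh(efficiency: float) -> float:
    # Input power required = load / efficiency; the surplus over the
    # load itself is dissipated as heat inside the UPS.
    input_kw = it_load_kw / efficiency
    return (input_kw - it_load_kw) * hours_per_year

loss_95 = annual_loss_kwh(0.95)  # roughly 461,000 kWh lost per year
loss_99 = annual_loss_kwh(0.99)  # roughly 88,500 kWh lost per year

print(f"Saving from a refresh: {loss_95 - loss_99:,.0f} kWh/year")
```

On these assumed numbers, a refresh recovers several hundred megawatt-hours a year per megawatt of IT load before counting the reduced cooling demand, which is the compounding benefit Dean points to.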

The message from inside the facility is consistent. Efficiency is no longer a secondary optimisation exercise. It is an essential strategy for managing the energy intensity introduced by AI workloads.

Resilience beyond the grid

Even the most efficient facility cannot operate without reliable supply. Across many regions, grid constraints are emerging as a barrier to growth, forcing operators to rethink the relationship between data centres and external power systems.

For Rolls-Royce, the answer lies in combining grid supply with autonomous generation and smarter energy management. “Major blackouts highlighted the vulnerability of grids,” Kevin McKinney, Vice President Powergen Sales Americas at Rolls-Royce Power Systems, says. “Critical infrastructure like data centres must be protected against outages and grid fluctuations, as these can lead to downtime, data loss and significant financial losses.”

McKinney points to a widening gap between demand and infrastructure readiness. “Data centres can be built quickly, but projects may be delayed or cancelled because grid expansion can take years,” he adds. “The rapidly increasing demand for electricity due to AI requires solutions now.”

Self-generation, once reserved for emergency backup, is increasingly being viewed as a strategic asset. Natural gas systems, dynamic UPS technologies and intelligent energy management platforms are being integrated to provide resilience and flexibility. “Own generation not only increases operational reliability but also strengthens grid stability and economic performance,” McKinney says. “We do not see one single solution. There will be different technologies depending on the region, circumstances and customer requirements.”

The sustainability debate becomes more nuanced within this context. Transitional approaches, including sustainable fuels and future hydrogen integration, are framed as pragmatic steps toward long-term decarbonisation rather than contradictions of it. The objective is continuity, ensuring that AI infrastructure can grow while grids adapt.

Energy intelligence will define the AI era

Across these differing perspectives, a common narrative emerges. Energy is no longer a supporting element of digital infrastructure. It is the organising principle around which AI strategy now revolves.

The paradox remains unresolved. AI promises to optimise economies and reduce inefficiency, yet the systems enabling that promise demand extraordinary levels of power. Operators are responding through smarter location choices, deeper operational insight and more resilient power architectures, but the scale of change required extends beyond the data centre fence line.

What becomes clear is that the future of AI will not be shaped solely by algorithms or hardware breakthroughs. It will be determined by how effectively energy systems and digital infrastructure evolve together. Regions that align policy, power and infrastructure will shape the next phase of economic growth. Those that fail to bridge the gap may find themselves importing intelligence rather than creating it.

The data centre is therefore undergoing a quiet transformation. It is no longer simply a place where servers reside. It is an active participant in the energy ecosystem, balancing consumption, resilience and sustainability in real time.

The energy paradox is not a temporary challenge. It is the defining narrative of the AI data centre revolution. Those who treat energy as a strategic asset rather than a constraint will determine where intelligence is manufactured, who controls it and how its value is distributed across the global economy.
