Rethinking power resilience as AI pushes data centres to their limits


Artificial intelligence is forcing data centres to reconsider the foundations of their electrical infrastructure. As workloads grow more power-dense and uptime expectations tighten, operators are under pressure to deliver resilience while controlling costs and limiting the environmental footprint of ever-larger facilities. One emerging approach, described by Clement Barthelmebs, Data Center Marketing Manager at Socomec Group, points to a growing debate about whether traditional redundancy models are still fit for the AI era.

The architecture in question, known as the block redundant or 'Catcher' design, challenges the assumption that full duplication of power systems is always necessary to achieve reliability. Instead of mirroring entire power streams, the approach combines fully utilised primary paths with strategically deployed redundant blocks designed to take over instantly if a problem occurs.

Barthelmebs argues that the shift towards AI-driven infrastructure is intensifying scrutiny of energy and cost efficiency across data centre design. “The architecture effectively allows the end user to choose the redundancy level required in order to optimise CAPEX for the data centre, while maintaining fault tolerance and the possibility of simultaneous maintenance,” he said.

At its core, the model relies on static transfer systems positioned between uninterruptible power supply units and IT loads. During normal operations, workloads run on the primary paths. If a fault or maintenance event occurs, the transfer system automatically shifts the load to a redundant path, maintaining uninterrupted power delivery.
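The switching behaviour can be pictured with a minimal sketch. The model below assumes one shared catcher path behind six primary streams; the class names, the simple boolean fault flag and the capacity figures are illustrative assumptions, not a representation of Socomec's control logic.

```python
# Minimal conceptual model of block-redundant (Catcher) failover.
# Names, capacities and the boolean fault flag are illustrative
# assumptions, not any vendor's actual implementation.

from dataclasses import dataclass

@dataclass
class PowerPath:
    name: str
    capacity_kw: float
    healthy: bool = True
    load_kw: float = 0.0

@dataclass
class StaticTransferSystem:
    """Feeds an IT load from its primary path; on a fault,
    transfers the load to the shared redundant (catcher) path."""
    primary: PowerPath
    catcher: PowerPath

    def feed(self, load_kw: float) -> str:
        if self.primary.healthy:
            self.primary.load_kw += load_kw
            return self.primary.name
        # Primary faulted: the catcher absorbs the load, provided
        # it has spare capacity for the failed block.
        if self.catcher.load_kw + load_kw <= self.catcher.capacity_kw:
            self.catcher.load_kw += load_kw
            return self.catcher.name
        raise RuntimeError("catcher capacity exceeded")

# Six fully utilised primary streams plus one shared catcher (6+1).
catcher = PowerPath("catcher", capacity_kw=1000)
streams = [PowerPath(f"primary-{i}", capacity_kw=1000) for i in range(6)]
sts_units = [StaticTransferSystem(p, catcher) for p in streams]

streams[2].healthy = False   # simulate a fault on one block
for sts in sts_units:
    print(sts.feed(1000))    # primary-2's load lands on the catcher
```

The sketch captures only the routing decision; in a real installation the static transfer is fast enough that the IT load sees no interruption in supply.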

Balancing resilience and efficiency

Traditional designs often rely on fully duplicated infrastructure to guarantee resilience. While reliable, these systems can leave large portions of electrical equipment underused, increasing both capital and operational costs. The Catcher model aims to address this imbalance by running the normal power streams at full capacity while keeping only a small number of blocks on standby.

Barthelmebs explained that a typical configuration might involve six normal power streams operating at 100 per cent utilisation, supported by one or two redundant streams ready to absorb load in the event of failure. The architecture can also combine static and automatic transfer switches to ensure seamless transition across both sides of IT racks, maintaining server-level redundancy throughout disruptions.
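The arithmetic behind that configuration is worth making explicit. The short calculation below compares a hypothetical 6+1 Catcher layout with a 2N mirror of the same six streams; the stream counts come from the example above, and everything else is an assumption for illustration.

```python
# Illustrative utilisation arithmetic for 6+1 block redundant vs 2N.
# Stream counts follow the example above; everything else is assumed.

it_load_streams = 6          # fully utilised primary streams

# 2N: every stream is mirrored, so half the fleet idles in normal operation.
streams_2n = 2 * it_load_streams
utilisation_2n = it_load_streams / streams_2n            # 0.50

# 6+1 Catcher: one shared redundant stream backs all six primaries.
streams_catcher = it_load_streams + 1
utilisation_catcher = it_load_streams / streams_catcher  # ~0.86

fewer_streams = 1 - streams_catcher / streams_2n         # ~0.42
print(f"2N utilisation: {utilisation_2n:.0%}")
print(f"6+1 utilisation: {utilisation_catcher:.0%}")
print(f"Streams saved vs 2N: {fewer_streams:.0%}")
```

On this bare stream count, the 6+1 layout runs the fleet at roughly 86 per cent utilisation against 50 per cent for 2N and needs about 42 per cent fewer streams, the same order as the capital expenditure figure cited below, although the real comparison covers transformers, gensets, UPS systems and batteries rather than streams alone.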

The implication is a system designed to preserve uptime without full duplication of equipment. According to Barthelmebs, this can significantly reduce the total amount of infrastructure required. “When you scale up to 10 data halls, although STS equipment is required to connect the redundant block, it is still less equipment overall,” he said. “We see fewer transformers and gensets, fewer UPS and batteries, up to 30 per cent less equipment in total.”
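The 10-hall claim can be sketched the same way. The figures below are purely hypothetical: one power stream of IT load per hall and one shared catcher block per three halls are assumptions chosen for illustration, not Socomec reference numbers. The point is only that the fleet shrinks even after the extra STS hardware is counted.

```python
# Hypothetical equipment count for ten data halls: 2N vs block redundant.
# All inputs are illustrative assumptions; the article quotes
# "up to 30 per cent less equipment in total".

halls = 10
streams_per_hall = 1      # assumed: one stream of IT load per hall
halls_per_catcher = 3     # assumed: one shared catcher per three halls

# 2N duplicates every stream (transformer, genset, UPS, batteries each).
equipment_2n = halls * streams_per_hall * 2                  # 20

# Block redundant keeps one stream per hall plus shared catchers,
# at the cost of one STS per hall to reach the redundant block.
catchers = -(-halls // halls_per_catcher)                    # ceil -> 4
sts_units = halls * streams_per_hall                         # 10
equipment_catcher = halls * streams_per_hall + catchers      # 14

saving = 1 - equipment_catcher / equipment_2n
print(f"Streams: 2N={equipment_2n}, Catcher={equipment_catcher} "
      f"(+{sts_units} STS), saving {saving:.0%}")            # 30%
```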

He added that comparisons between a single STS Catcher architecture and a traditional 2N design show a potential global capital expenditure reduction of 42 per cent and a footprint reduction of 38 per cent.

Adapting to AI-driven growth

The rise of AI introduces new dynamics into data centre planning. Compute clusters supporting training and inference workloads demand higher power densities and tighter tolerances for downtime. At the same time, energy efficiency and sustainability targets are becoming central to investment decisions and regulatory scrutiny.

Barthelmebs argues that flexibility is becoming a defining requirement as data centres scale. “The Catcher model can optimise redundancy while limiting investment costs,” he said. “Being highly flexible, it is the ideal solution in terms of adapting to the very specific and evolving needs of data centres.”

A further consideration is equipment compatibility, particularly as facilities integrate components from multiple vendors. Ensuring that UPS systems and transfer switches operate harmoniously is critical, he noted, especially during voltage variation or high-efficiency operating modes.

The architecture has been deployed in the field for several years, with several hundred megawatts of installations demonstrating reliability in operational environments, according to Barthelmebs. While the concept remains part of a broader discussion about data centre design evolution, its growing use highlights a shift towards models that seek to balance resilience, cost control and sustainability.

As AI expands from pilot deployments into core enterprise infrastructure, the pressure on power systems will only increase. The debate now facing operators is not simply how to add more capacity, but how to redesign electrical foundations so that resilience can scale without duplicating inefficiency. In that context, architectural choices once treated as engineering details are rapidly becoming strategic decisions that shape the economics of the AI era.
