As enterprise demand for generative AI accelerates, power delivery has become the silent crisis hiding behind the GPU hype. The move to gigawatt-scale AI infrastructure requires not just more energy but smarter, faster, and more modular ways of delivering it, right down to the chip level.
From grid edge to GPU core, the race to rewire the world for AI is on. The modern data center is pushing against physical and electrical limits that once seemed unreachable. Every new generation of AI compute, notably NVIDIA’s H100 and B100 class systems, arrives with a steep jump in power draw. A single NVIDIA HGX B200 rack now draws over 300 kW, roughly ten times the load of a conventional CPU rack just a few years ago. In aggregate, forecast AI compute infrastructure demands a leap from megawatts to gigawatts, often within the same physical footprint.
This is not simply a question of scale. It is a redefinition of how energy is managed. The traditional cascade of transformers, AC/DC conversions, and intermediate voltage steps is no longer viable at these densities. Electrical losses, phase imbalance, and latency in dynamic load response become systemic risks.
Ralf Pieper, R&D Director at Delta Electronics’ Custom Design Business Unit, argues that the conversation must shift from supplying more power to supplying better power. “Traditional AC distribution simply does not scale efficiently with AI’s dynamic and unpredictable power needs,” he says. “It introduces phase imbalance, flicker, and distribution losses that are no longer acceptable at these densities. Our approach begins by fundamentally rethinking the pathway, from the grid transformer all the way to the GPU core.”
Every watt counts
In this emerging architecture, the industry’s long-term direction points toward high-voltage direct current (HVDC) as the backbone of gigawatt-scale AI data centers. Pieper and his team at Delta are driving a transition away from bulky, multi-stage AC infrastructure toward highly integrated, digitally monitored HVDC power paths. Their goal is not merely efficiency but responsiveness: the ability to adapt power delivery in real time to the highly erratic draw profiles of modern AI workloads.
The shift to 800V HVDC architecture is not theoretical. It is already happening. Delta’s modular power shelves now deliver up to 55 kW per rack unit using a three-phase AC input with an efficiency of over 98 per cent. These shelves interface directly with power capacitor units designed to respond to sub-millisecond spikes in demand, smoothing out the electrical noise that would otherwise ripple across the entire data hall.
Pieper explains that energy efficiency has become both a design imperative and a strategic advantage. “Energy efficiency is not just a feature; it is an engineering principle embedded in every level of our system,” he adds. “From the power entry point to the final DC-DC converters on the board, we optimise for loss, redundancy, and control.”
That control extends into every layer of the physical infrastructure. Whereas previous data center generations could tolerate inefficiencies in return for simplicity, the cost of poor power management in AI environments is no longer just financial; it is operational. The average AI server rack today consumes as much power as a small office building. A poorly tuned power profile can lead to tripped protection systems, delayed jobs, and downstream failures.
This is where the role of capacitive buffers becomes essential. Rather than relying solely on large central UPS systems, Delta distributes energy storage throughout the rack ecosystem. Capacitor shelves using supercapacitor and lithium-ion hybrid designs now provide rapid energy bursts, shaving power peaks and reducing upstream draw variance from 73 per cent down to six per cent in live test environments.
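To make the peak-shaving idea concrete, here is a minimal, purely illustrative Python sketch, not Delta’s design: a bursty rack load is capped at an upstream limit while a local energy buffer absorbs the spikes and recharges during lulls. The load profile, cap, and buffer size are invented numbers.

```python
import random

# Illustrative sketch only: a toy peak-shaving model, not a vendor design.
# A bursty AI rack load alternates between a base level and training spikes;
# a local energy buffer (capacitor shelf) caps what the upstream feed supplies.

def bursty_load(steps=1000, base_kw=120.0, spike_kw=300.0, spike_prob=0.2):
    """Synthetic rack load: mostly near base_kw, with random training spikes."""
    return [spike_kw if random.random() < spike_prob else base_kw
            for _ in range(steps)]

def shave_peaks(load_kw, cap_kw=180.0, buffer_kwh=2.0, dt_h=1 / 3600):
    """Limit upstream draw to cap_kw; the buffer absorbs the difference."""
    soc = buffer_kwh                  # state of charge, kWh (hypothetical sizing)
    upstream = []
    for p in load_kw:
        if p > cap_kw and soc > 0:    # discharge the buffer on spikes
            deficit = min(p - cap_kw, soc / dt_h)
            soc -= deficit * dt_h
            upstream.append(p - deficit)
        else:                         # recharge the buffer when there is headroom
            recharge = min(max(cap_kw - p, 0), (buffer_kwh - soc) / dt_h)
            soc += recharge * dt_h
            upstream.append(p + recharge)
    return upstream

def spread(series):
    """Peak-to-mean spread, a rough stand-in for upstream draw variance."""
    mean = sum(series) / len(series)
    return (max(series) - mean) / mean

load = bursty_load()
print(f"raw spread:    {spread(load):.0%}")
print(f"shaved spread: {spread(shave_peaks(load)):.0%}")
```

Running the sketch shows the upstream peak-to-mean spread collapsing once the buffer carries the transients, the same effect the capacitor shelves aim for at rack scale, though the figures it prints are toy values rather than Delta’s measurements.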
Digital twins and production realism
Power is not just being engineered. It is being simulated, modelled and tested at full production fidelity before any component is installed. In a sector where time-to-market is shrinking from 18 months to less than six, AI is not just the end-user of infrastructure; it is becoming the design partner.
Delta has begun embedding NVIDIA’s Omniverse platform into its product development cycle, using digital twins to model entire manufacturing processes before the first PSU rolls off the line. “We replicate the full production line digitally to pre-validate every aspect of manufacturing, performance, and thermal dynamics,” Pieper explains. “This is how we compress cycles while maintaining the quality required for high-stakes workloads.”
This integration of design and simulation does more than speed development. It enables Delta to deliver highly customised power systems aligned to each customer’s compute and thermal profile. Rather than offering generic racks and power delivery units, the company can now configure complete grid-to-chip pathways, from transformer specification to onboard voltage regulators, for specific AI models and runtime characteristics.
It also facilitates predictive maintenance and lifecycle optimisation. Because the manufacturing process is modelled in parallel with product usage, the same data structures can be used to feed machine learning models that anticipate when a capacitor shelf might degrade or when airflow patterns will begin to compromise thermal performance.
This tight loop between design, deployment, and monitoring represents a shift in how infrastructure is built. It is not static. It is an evolving platform that improves over time, even after deployment.
Edge of failure, edge of innovation
As more AI training workloads move to hyperscale environments, the distribution of power becomes not only a technical concern but a strategic constraint. Rack design, component placement, airflow, and backup redundancy all become variables that directly impact whether an organisation can scale its models.
“There are still major blind spots,” Pieper says. “If 75 per cent of a rack is occupied by compute, you have no space for backup batteries, peak capacitors or additional power shelves. You cannot just scale up. You must scale out horizontally, intelligently, and with modular infrastructure.”
The result is a new generation of ‘side-power racks’, auxiliary structures that live beside primary compute racks, equipped with their own HVDC converters, power capacitors, and monitoring systems. These side units allow power to be delivered independently but synchronised with compute dynamics. They also offer another advantage: they decouple power from compute lifecycles, enabling flexible upgrades as GPUs evolve.
Delta’s approach involves not just physical proximity but digital orchestration. The side-power units are linked to shelf controllers and monitoring nodes that integrate with broader BMC (Baseboard Management Controller) frameworks and OpenBMC stacks. This ensures real-time telemetry, failure isolation, and firmware updates across the entire grid-to-GPU pathway.
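As a rough illustration of what that telemetry path can look like, the sketch below polls a chassis power reading over a Redfish interface of the kind OpenBMC exposes. The address, credentials, and chassis identifier are placeholders, and exact resource paths vary by firmware and schema version.

```python
import requests

# Minimal sketch of polling power telemetry from an OpenBMC-style Redfish API.
# Host, credentials, and chassis ID below are assumptions, not real values.
BMC = "https://10.0.0.42"          # hypothetical shelf/rack BMC address
AUTH = ("telemetry", "secret")     # placeholder credentials

def read_power_watts(chassis_id="chassis"):
    """Fetch a chassis power reading over Redfish (schema- and platform-dependent)."""
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Power"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=5)
    resp.raise_for_status()
    control = resp.json().get("PowerControl", [{}])[0]
    return control.get("PowerConsumedWatts")

if __name__ == "__main__":
    watts = read_power_watts()
    print(f"chassis draw: {watts} W")
```

In a production stack this kind of poll would typically feed a time-series store rather than a print statement, but the point is that the grid-to-GPU pathway becomes observable through the same management plane as the servers themselves.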
Redundancy, too, is evolving. Instead of treating UPS systems as a last-resort measure, Delta incorporates partial backup capabilities directly into power shelves and capacitive buffers. These units operate in tandem with grid-interactive UPS systems that offer not only failover but grid support functionality, such as frequency regulation and reactive power injection.
The result is an ecosystem that can operate as both a consumer and stabiliser of energy, supporting data center uptime while also contributing to the resilience of the surrounding grid.
The next bottleneck is not silicon
For all the focus on GPU innovation, the ability to deploy and scale AI may depend more on electrical architecture than chip design. Models will not fail because they lack parameters; they will fail because the data center cannot feed them.
Pieper argues that the industry must stop thinking about power as a constraint and start thinking about it as a co-processor. “Power is no longer a passive background function,” he says. “It is a dynamic part of the AI stack, and its role must be treated with the same strategic priority as networking, storage, or compute.”
This reframing puts pressure on infrastructure teams to collaborate more closely with AI developers. In traditional IT, power was someone else’s problem. In the age of AI, it is everyone’s problem. From GPU-aware power shelves to software-orchestrated grid interfaces, the energy system is being woven into the fabric of AI itself.
It also challenges procurement and capacity planning norms. Instead of sizing infrastructure based on static metrics like average draw, teams must account for peak-to-average ratios, thermal gradients, and load transients over microsecond windows. Power delivery is no longer an envelope; it is a curve, and it must be managed accordingly.
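A trivial worked example shows why the distinction matters. Using an invented ten-sample trace, sizing to the average alone hides most of the transient headroom the feed actually needs:

```python
# Toy sizing check: provisioning on average draw versus peak-to-average ratio.
# The samples below are illustrative, not measurements from any vendor.

samples_kw = [118, 121, 290, 124, 119, 305, 122, 117, 298, 120]  # short-window samples

avg_kw = sum(samples_kw) / len(samples_kw)
peak_kw = max(samples_kw)
par = peak_kw / avg_kw                 # peak-to-average ratio

print(f"average draw:          {avg_kw:.0f} kW")
print(f"peak draw:             {peak_kw:.0f} kW")
print(f"peak-to-average ratio: {par:.2f}")

# Sizing the feed, breakers, and cooling to avg_kw alone would undershoot by
# the full transient margin; sizing to the curve means carrying headroom (or
# local buffering) for peak_kw, not avg_kw.
```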
Designing for AI’s second act
Looking ahead, Pieper sees the next phase of AI development pushing even harder against legacy power paradigms. As chips move toward single-stage voltage conversion that bypasses today’s intermediate 12V and 50V stages, the need for near-chip conversion and intelligent power distribution will only grow. Delta’s roadmap already includes board-level transformers (TVRs) capable of delivering 0.8V directly from 50V rails, reducing conversion loss and improving transient response.
These innovations extend to tray-level systems, where power distribution boards (PDBs) now use horizontal cold plate cooling and parallel DC-DC modules to manage power density exceeding 500W per cubic inch. The result is higher efficiency, lower noise, and finer control, delivered at the point of compute rather than the edge of the rack.
HVDC-native designs are also enabling new integration models. By operating data center infrastructure at 800V DC and stepping down only when required, system designers can eliminate multiple stages of conversion, reduce cabling overhead, and simplify grounding schemes. This approach aligns well with carbon neutrality goals, as it allows easier integration with renewable sources and solid-state transformers capable of grid-friendly behaviour.
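A back-of-envelope comparison illustrates why fewer stages matter. The per-stage efficiencies below are assumed round numbers for illustration only, not measured data from Delta or anyone else; the point is that losses multiply across the chain.

```python
# Cascaded conversion efficiency: every stage's loss compounds end to end.
# All per-stage values are illustrative assumptions.

def chain_efficiency(stages):
    eff = 1.0
    for e in stages:
        eff *= e
    return eff

# Hypothetical legacy path: grid AC -> site transformer -> rack AC/DC -> 48V/12V -> ~1V
legacy = chain_efficiency([0.98, 0.96, 0.97, 0.94, 0.90])

# Hypothetical HVDC-native path: grid AC -> 800V DC backbone -> 50V rack bus -> ~0.8V
hvdc = chain_efficiency([0.98, 0.985, 0.975, 0.92])

print(f"legacy multi-stage path: {legacy:.1%} end-to-end")
print(f"HVDC-native path:        {hvdc:.1%} end-to-end")
```

With these assumed figures the shorter chain recovers several percentage points of end-to-end efficiency, which at rack loads in the hundreds of kilowatts translates directly into less waste heat to remove.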
“We are entering a new class of data center,” Pieper concludes. “The gigawatt era will not be solved by stacking more hardware. It will be solved by reengineering how energy, computation, and time intersect.” In that equation, power is not the problem. It is the platform.




