The vision of a commercially viable quantum computer has long resided in the realm of the theoretical, but a new generation of accelerated systems is bringing it within reach. This week, NVIDIA revealed how its GB200 NVL72 architecture is driving dramatic breakthroughs across five of the most complex and computationally intensive quantum computing workloads, laying the groundwork for future hybrid quantum-AI supercomputers.
Built on NVIDIA’s Blackwell platform and featuring its fifth-generation NVLink interconnect, the GB200 NVL72 delivers all-to-all GPU connectivity at unprecedented bandwidth. The impact, according to NVIDIA, is already being felt across a range of quantum research domains, from simulating quantum error correction to generating synthetic training data for AI control models.
What sets this development apart is not just the scale of the performance gains (up to 4,000 times faster in some scenarios) but the architectural flexibility they enable. In a field often constrained by brittle hardware or isolated point solutions, this kind of general-purpose acceleration may prove decisive.
Accelerating what cannot be built yet
Quantum computing’s near-term challenges are defined by absence: missing hardware, noisy qubits, unreliable outputs. Overcoming these requires large-scale simulation, which in turn demands enormous compute resources.
For instance, developing new quantum algorithms, a prerequisite for any future application, involves simulating the behaviour of quantum circuits at scale. Working with engineering software company Ansys, researchers using Denmark’s DCAI Gefion supercomputer have achieved an 800-fold speedup over traditional CPU methods, thanks to the GB200 NVL72’s capacity to run NVIDIA’s cuQuantum libraries at full bandwidth.
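To see why circuit simulation is so compute-hungry, consider that a statevector simulator must track 2^n complex amplitudes for n qubits, doubling memory and work with every qubit added. The toy sketch below (plain NumPy, not the cuQuantum API) applies a Hadamard gate to each qubit of a 20-qubit statevector; the gate-application trick of reshaping the vector into a tensor is the same idea production simulators accelerate on GPUs:

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    # Reshape the 2**n vector so the target qubit is its own axis,
    # contract that axis with the gate, then restore the flat shape.
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 20  # 2**20 complex128 amplitudes = 16 MiB; each extra qubit doubles it
state = np.zeros(2**n, dtype=np.complex128)
state[0] = 1.0  # |00...0>

hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, hadamard, q, n)

# Result is the uniform superposition: every outcome has probability 2**-n
print(np.allclose(np.abs(state) ** 2, 1 / 2**n))
```

At 50 qubits the same vector would need petabytes of memory, which is why large-scale simulation leans on GPU memory bandwidth and multi-GPU interconnects like NVLink.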
A similar story unfolds in hardware design. Quantum systems are plagued by noise and error, so simulating how qubits degrade under real-world conditions is essential. With the NVL72 and NVIDIA’s dynamics library, developers like Alice & Bob are now able to run these simulations 1,200 times faster, enabling new forms of iterative design.
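The standard way to model how a qubit decays is to evolve its density matrix under a noise channel. As a minimal sketch (illustrative parameters, not Alice & Bob's actual models or NVIDIA's dynamics library), the amplitude-damping channel below captures energy relaxation: an excited qubit leaks toward the ground state step by step:

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Evolve a single-qubit density matrix one step under amplitude
    damping (energy relaxation) with decay probability gamma."""
    k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])  # Kraus operators
    k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return k0 @ rho @ k0.conj().T + k1 @ rho @ k1.conj().T

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in excited state |1>
gamma = 0.1
for _ in range(50):
    rho = amplitude_damping(rho, gamma)

# Excited-state population decays as (1 - gamma)**steps toward |0>
print(rho[1, 1].real)  # ~ 0.9**50, about 0.005
```

Real design work sweeps such simulations over many qubits, couplings and noise parameters, which is where the reported 1,200-fold speedups matter.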
Quantum needs data, but AI needs more
One of the more novel integrations lies in using simulated quantum outputs to train AI models: the same models that will later be used to stabilise or manage quantum hardware.
While real quantum machines remain too scarce or fragile for data generation at scale, the GB200 NVL72 can produce equivalent training data at speeds up to 4,000 times faster than CPU-only methods. This enables machine learning tools to be developed in parallel with the quantum systems they will eventually govern.
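A concrete (and deliberately simplified) version of this idea: simulate noisy qubit-readout signals in software, then fit a discriminator on that synthetic data. Everything below is a hypothetical sketch; the voltages and noise levels are illustrative, not hardware values:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_readout(n_samples, state, mean0=0.0, mean1=1.0, sigma=0.3):
    """Simulated readout voltages for qubits prepared in |0> or |1>.
    Means and noise level are made up for illustration."""
    return rng.normal(mean1 if state == 1 else mean0, sigma, n_samples)

# Generate labelled training data entirely in simulation
x0 = synthetic_readout(10_000, state=0)
x1 = synthetic_readout(10_000, state=1)

# Train the simplest possible "model": a discrimination threshold
threshold = (x0.mean() + x1.mean()) / 2

# Evaluate assignment fidelity on fresh synthetic data
t0 = synthetic_readout(10_000, state=0)
t1 = synthetic_readout(10_000, state=1)
fidelity = ((t0 < threshold).mean() + (t1 >= threshold).mean()) / 2
print(f"assignment fidelity = {fidelity:.3f}")
```

In practice the "model" would be a neural network and the simulation far richer, but the workflow is the same: the classifier is ready before the hardware it will one day read out.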
The implications stretch beyond quantum computing. AI helps run quantum computers, and quantum simulations help train the AI: a feedback loop that is likely to define the architecture of many future advanced supercomputing platforms.
Toward hybrid quantum-AI infrastructure
A central challenge in deploying quantum computing commercially is building algorithms that can straddle both quantum and classical computing environments. Hybrid models do just that, allocating sub-tasks to whichever hardware type is best suited.
NVIDIA’s CUDA-Q platform enables such hybrid execution by drawing on the NVL72 system to simulate quantum environments alongside high-performance classical computation. It’s a test bed for future applications in chemistry, finance and machine learning. Initial use cases suggest up to 1,300 times acceleration in hybrid algorithm development.
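The shape of a hybrid algorithm is easy to see in miniature. Below is a toy variational loop (plain NumPy, not CUDA-Q syntax): the "quantum" step simulates a one-qubit circuit and measures an expectation value, while the classical step computes a parameter-shift gradient and updates the circuit parameter. On real hardware, only the first step would move to the QPU:

```python
import numpy as np

def expectation_z(theta):
    """'Quantum' step: simulate Ry(theta)|0> and measure <Z>.
    On a real hybrid system this circuit would run on the QPU."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2  # equals cos(theta)

theta, lr = 0.1, 0.2
for _ in range(100):
    # Classical step: parameter-shift gradient plus gradient descent
    grad = (expectation_z(theta + np.pi / 2)
            - expectation_z(theta - np.pi / 2)) / 2
    theta -= lr * grad

print(round(expectation_z(theta), 3))  # converges to -1, the minimum of <Z>
```

Real workloads in chemistry or finance replace the one-qubit circuit with large ansatz circuits and the scalar parameter with thousands of them, which is where simulated-QPU acceleration pays off during development.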
The most critical application may be in error correction, without which full-scale quantum computing is impossible. Here, GB200 NVL72 accelerates decoding algorithms, used to interpret and repair qubit outputs, by up to 500 times. These processes must run in real time and scale to terabytes per second, something only GPU acceleration can feasibly deliver.
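Decoding is, at heart, statistical inference over noisy measurement outcomes. The simplest possible example (a classical repetition code with a majority-vote decoder, far simpler than the surface-code decoders real systems need) shows the payoff: redundancy plus decoding pushes the logical error rate well below the physical one:

```python
import numpy as np

rng = np.random.default_rng(1)

def decode_repetition(received):
    """Majority-vote decoder for the n-bit repetition code."""
    return (received.sum(axis=1) > received.shape[1] // 2).astype(int)

# Encode random logical bits into 5 physical bits; flip each with prob p
n_shots, n_rep, p = 100_000, 5, 0.05
logical = rng.integers(0, 2, n_shots)
physical = np.repeat(logical[:, None], n_rep, axis=1)
flips = (rng.random((n_shots, n_rep)) < p).astype(int)
received = physical ^ flips

decoded = decode_repetition(received)
logical_error_rate = (decoded != logical).mean()
print(logical_error_rate)  # roughly 0.001, far below the physical rate p = 0.05
```

Quantum decoders face the same trade-off at vastly larger scale: millions of syndrome measurements per second per logical qubit, all of which must be decoded within the qubits' coherence time, hence the emphasis on real-time GPU throughput.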
Building the path to quantum utility
This is not merely theory. Qubit designer Diraq has announced it is using the DGX Quantum reference architecture to integrate its spins-in-silicon technology with NVIDIA GPUs. At the same time, academic researchers are being onboarded through the NVIDIA CUDA-Q programme to experiment with real-world deployment scenarios using NVL72.
In a sector long hampered by uncertainty, cost and hype, the ability to simulate, test and optimise across architectures, using infrastructure that already exists, marks a quiet inflection point.
Quantum computing may still be years from general commercial use, but the architecture that will make it viable is coming into focus. As classical and quantum paradigms continue to converge, the race to useful quantum is no longer just about qubits. It is about systems that are AI-ready, simulation-rich and architecturally open, and those are beginning to arrive.