Why the next phase of AI will be built in gigawatts, not models


Artificial intelligence is moving into an industrial phase where scale, power and physical infrastructure matter as much as algorithms. A new expansion of the collaboration between NVIDIA and CoreWeave underlines how rapidly that shift is taking place, and how the race to deploy AI at global scale is now defined by data centres measured in gigawatts rather than in racks or clusters.

The two companies have announced a deepening of their long-standing relationship to accelerate the buildout of more than five gigawatts of AI factories by 2030. Alongside this operational alignment, NVIDIA has invested $2 billion in CoreWeave through the purchase of Class A common stock, signalling confidence in CoreWeave’s role as a cloud platform built natively around NVIDIA infrastructure.
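To put five gigawatts in perspective, a rough back-of-envelope sketch helps. The per-accelerator power figure below is an illustrative assumption, not a number from the announcement:

```python
# Back-of-envelope: how many accelerators could 5 GW of AI-factory
# capacity power? All figures below are illustrative assumptions.
FACILITY_POWER_W = 5e9      # 5 gigawatts of planned capacity
POWER_PER_GPU_W = 1_200     # assumed all-in draw per accelerator,
                            # including CPU, networking and cooling share

gpus_supported = FACILITY_POWER_W / POWER_PER_GPU_W
print(f"~{gpus_supported:,.0f} accelerators")  # on the order of 4 million
```

Even under conservative assumptions, capacity at this scale is measured in millions of accelerators, which is why land, power and shell infrastructure, rather than chips alone, dominate the buildout.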

The announcement reflects a wider reality facing the AI sector. Demand for compute continues to grow at a pace that traditional cloud expansion models struggle to match. Training, fine-tuning and deploying large AI systems now require tightly integrated environments where hardware, software and operations are designed together from the outset.

AI factories replace generic cloud

Central to the expanded collaboration is the concept of AI factories, purpose-built facilities designed to run accelerated computing workloads at scale. These are not general-purpose data centres retrofitted for AI, but environments engineered around GPUs, high-speed networking and AI-native software stacks.

Under the agreement, CoreWeave will develop and operate AI factories using NVIDIA’s accelerated computing platforms to meet customer demand. NVIDIA will also use its financial strength to support CoreWeave’s procurement of land, power and shell infrastructure, a recognition that access to energy and suitable sites has become one of the primary constraints on AI growth.

This focus on physical buildout highlights how AI infrastructure has become a strategic asset. As AI systems move into large-scale production, the ability to bring capacity online quickly and reliably is increasingly decisive. The partnership positions CoreWeave as an execution-focused operator capable of translating NVIDIA’s roadmaps into deployed, usable capacity.

Aligning software with silicon

Beyond bricks and power, the collaboration places strong emphasis on software and reference architectures. NVIDIA and CoreWeave plan to test and validate CoreWeave’s AI-native software and reference designs, including SUNK and CoreWeave Mission Control. The intention is to unlock deeper interoperability and work towards incorporating these elements into NVIDIA’s reference architectures for cloud partners and enterprise customers.

This reflects an important shift in how AI platforms are being built. Rather than treating infrastructure and software as separate layers, the industry is moving towards vertically aligned systems where operational tooling, scheduling, monitoring and optimisation are tightly coupled with the underlying hardware.

CoreWeave will also deploy multiple generations of NVIDIA infrastructure across its platform, including early adoption of upcoming architectures such as the NVIDIA Rubin platform, NVIDIA Vera CPUs and NVIDIA BlueField storage systems. Early access to new hardware generations is increasingly critical as customers seek both performance gains and lower costs for inference at scale.

Michael Intrator, co-founder and chief executive of CoreWeave, framed the collaboration around a single principle: AI succeeds when software, infrastructure and operations are designed together. He pointed to growing market demand as AI systems transition from experimentation into production environments where cost, reliability and throughput become paramount.

Infrastructure as the new bottleneck

Jensen Huang, founder and chief executive of NVIDIA, described the moment as the largest infrastructure buildout in human history, driven by AI’s next frontier. While such language reflects the scale of ambition, the underlying message is clear. The limiting factor for AI adoption is no longer innovation in models alone, but the ability to deploy vast amounts of compute efficiently.

The expanded collaboration between NVIDIA and CoreWeave illustrates how AI is reshaping the priorities of the technology industry. Capital investment, power availability and execution velocity now sit alongside research as defining competitive advantages.

As AI systems become embedded across industries, from science to finance to manufacturing, the question is no longer who has the most advanced model. It is who can build, operate and sustain the infrastructure required to run those models at global scale.
