Crusoe has announced plans to build a 900 megawatt AI data centre campus in Abilene, Texas, a project that underlines how artificial intelligence is driving a new phase of infrastructure expansion measured not in megawatts but in gigawatts.
The facility, designed to support large-scale AI workloads for Microsoft, will include two new buildings and an on-site power plant, taking the total projected capacity of Crusoe’s Abilene operations to approximately 2.1 gigawatts. Once complete, the site will rank among the most significant concentrations of AI compute globally.
The announcement reflects a broader shift in how AI infrastructure is being conceived. What were once data centres are increasingly being described as “AI factories”, purpose-built environments designed to sustain continuous, high-density computation at industrial scale.
Scale and speed redefine deployment
The Abilene development is notable not only for its size, but for the speed at which such infrastructure is now being delivered. Crusoe’s initial project at the site, comprising two 100 megawatt buildings, was completed and energised in under a year. A second phase, expanding the campus to 1.2 gigawatts across eight buildings, is expected to be completed by the end of 2026.
The new 900 megawatt campus represents a further step in that trajectory, with the first building expected to be operational by mid-2027. This pace of deployment points to an accelerating cycle in which demand for AI compute is forcing infrastructure providers to compress timelines that would previously have been considered impractical.
At the heart of this expansion is the need to support increasingly complex AI workloads. The new buildings are designed to deliver ultra-high-density compute, with each capable of supporting 336 megawatts of critical IT load, reflecting the requirements of next-generation GPU architectures.
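The published capacity figures are internally consistent, as a quick back-of-the-envelope check shows (a minimal sketch using only the megawatt values reported in the announcement):

```python
# Sanity-check of the capacity figures reported for the Abilene campus.
# All values are in megawatts (MW), as stated in the announcement.

MW_PER_GW = 1_000

phase_two_total_mw = 1_200   # second phase: eight buildings, 1.2 GW
new_campus_mw = 900          # newly announced 900 MW campus
site_total_mw = phase_two_total_mw + new_campus_mw

new_buildings = 2            # two new buildings in the 900 MW campus
it_load_per_building_mw = 336

print(f"Projected site total: {site_total_mw / MW_PER_GW} GW")  # 2.1 GW
print(f"Critical IT load of new buildings: {new_buildings * it_load_per_building_mw} MW")  # 672 MW
```

The gap between the 900 MW campus rating and the 672 MW of combined critical IT load reflects the usual overhead for cooling, power conversion and redundancy rather than any inconsistency in the figures.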
Energy becomes the primary constraint
The defining feature of the project is its energy model. The campus will include a dedicated 900 megawatt on-site power plant, paired with battery energy storage systems to enhance reliability and resilience. This “behind-the-meter” approach allows the facility to operate with a degree of independence from the wider grid, addressing one of the central challenges facing AI infrastructure: access to sufficient, stable power.
As AI systems scale, energy availability is emerging as the primary constraint on growth. The integration of on-site generation into data centre design suggests a shift towards energy-first infrastructure, where power is not simply a utility input but a core component of system architecture.
Water and cooling are also central considerations. The Abilene campus will use closed-loop, non-evaporative liquid cooling systems, reflecting the need to manage thermal loads efficiently while limiting resource consumption.
From data centres to industrial complexes
The economic impact of such projects is becoming increasingly significant. The development is expected to create thousands of construction jobs and hundreds of permanent roles, while existing facilities at the site are already contributing a substantial share of local property tax revenues. With the expansion, those contributions are expected to increase further.
The scale of the Abilene campus illustrates a broader transformation underway in artificial intelligence. As demand for compute continues to grow, infrastructure is evolving from distributed networks of data centres into concentrated industrial complexes, integrated with energy systems and local economies.
This raises new questions about how such facilities are planned, governed and sustained. The concentration of capacity brings efficiencies, but also intensifies dependencies on power, land and regional infrastructure. It also reinforces the strategic importance of location, as regions compete to host the next generation of AI systems.
What is becoming clear is that AI is no longer simply a software layer. It is an industrial force, reshaping how infrastructure is built and where economic value is created.