The future of sustainable AI demands radical thinking, embracing adaptability, decentralisation, and new energy solutions. Mark Venables examines how innovative data centre designs are tackling the environmental pressures posed by AI’s rapid growth.
The rapid evolution of artificial intelligence (AI) has sparked revolutionary shifts across industry sectors. But with AI’s potential comes a critical challenge: the environmental impact of the infrastructure required to power its growth. As AI models grow larger and more sophisticated, traditional data centres are reaching their sustainability limits, and fundamental changes are needed to avoid a damaging environmental trade-off.
Trenton Thornock, CEO and founder of Prometheus Hyperscale, recognised this looming infrastructure challenge years before it became a mainstream concern. “Back in 2017, I saw clearly that AI infrastructure needed to radically change,” Thornock says. “Our mission from the beginning was to build infrastructure specifically for AI and high-performance computing (HPC) without compromising sustainability.”
The imperative of zero-water consumption
Water scarcity in the western United States was a catalyst for Prometheus Hyperscale’s unique approach. Traditional data centres are notoriously water-intensive, consuming vast amounts for cooling. Recognising the unsustainability of this model, Prometheus developed a data centre design that is completely water-free, halving overall power usage compared with conventional models.
“We entirely eliminated water consumption,” Thornock explains. “And we did it at a time when few were discussing AI’s explosive growth and its environmental impact. It wasn’t just good business; it was the only sustainable path forward.”
Prometheus further mitigates the environmental impact of AI infrastructure by repurposing waste heat rather than simply dissipating it, and by turning to advanced energy solutions such as small modular reactors (SMRs) for power. “We recognised early that large-scale solar isn’t viable everywhere,” Thornock points out. “Our collaboration with Oklo to deploy SMRs reflects a strategic shift to genuinely sustainable power solutions.”
Thornock adds context by noting the practical and ecological limitations of large-scale solar farms: “The sheer volume of land required for large-scale solar makes it impractical for many locations,” he continues. “Our approach optimises land use and leverages new energy technologies effectively.”
Fit-for-purpose infrastructure
Thornock’s background in drilling engineering provided unique insights into the creation of fit-for-purpose AI infrastructure. “In drilling, equipment must survive extreme conditions – deep underground, with limited access and demanding reliability,” he says. “AI infrastructure similarly requires longevity, adaptability, and resilience.”
To meet these demands, Prometheus adopted an open-architecture approach, allowing infrastructure modification at the rack level and significantly extending asset lifespan. “Our racks can adapt to emerging technology without costly, large-scale renovations,” Thornock adds. “This modular design is critical to future-proofing.”
The current trajectory of AI chip development underscores this necessity. “Back when I started in 2017, racks drew about 5-10 kilowatts,” Thornock recalls. “Today, the chips in a single rack might each need over 1,000 watts. Current infrastructure isn’t built for these demands.”
He argues that the solution lies in borrowing established practices from other sectors, particularly oil and gas. “Offshore oil platforms mastered liquid cooling, heat management, and high-density electrical distribution decades ago,” he says. “Adopting these proven methods in data centres is the only logical step.”
Decentralising AI infrastructure
An entrenched debate in AI circles revolves around centralisation versus decentralisation. For Thornock, the argument for decentralisation is compelling. “We need large-scale, edge-integrated solutions rather than traditional, centralised hyperscale data centres,” he argues. “Our approach is flexible, avoiding outdated practices like charging for physical connections, reminiscent of an old-fashioned telephone switchboard.”
Prometheus’s strategy involves partnering with companies like Lumen to ensure connectivity comparable to traditional data centre hubs. Despite this, some market resistance remains. “Being located in Wyoming caused hesitation among potential clients,” Thornock acknowledges. “But as AI’s power demands escalate, decentralisation will become less an option and more a necessity.”
Thornock further stresses that rigid infrastructure is inherently risky. “AI technology is evolving so rapidly that investing in inflexible data centre designs is unwise,” he says. “Companies must prioritise adaptability over tradition.”
Navigating towards sustainable AI
While sustainability claims are abundant, Thornock stresses the importance of tangible results. “Many businesses talk about green AI, but what does it really look like?” he asks. “Genuine green AI means radically reducing energy consumption, repurposing heat effectively, and embracing clean energy solutions like small modular reactors.”
Prometheus’s partnership with Oklo on small modular reactors is a key example. Oklo’s reactor technology is based on proven operational designs from Idaho National Laboratory. “This isn’t theoretical – it is an immediate path to large-scale carbon-neutral power,” Thornock adds.
By reducing reliance on land-intensive solar installations, SMRs offer an effective route to sustainable energy. “Large-scale solar is not practical,” Thornock notes. “It’s a matter of land usage and resource allocation. SMRs bypass these limitations and can be deployed faster than many realise.”
AI’s next chapter: Inference over training
Looking towards 2030, Thornock anticipates significant shifts in AI workloads. He expects the focus to move from training large models to inference, where efficiency and speed are paramount and infrastructure must adapt accordingly. Economic and environmental necessity, he argues, will force enterprises to abandon inefficient practices. “Currently, organisations use resource-intensive AI models for trivial tasks,” Thornock says. “But costs – both environmental and economic – will inevitably demand a more strategic, thoughtful approach to AI deployment.”
He highlights another critical factor: legacy infrastructure constraints. “Retrofitting air-cooled data centres for liquid cooling is financially prohibitive,” Thornock points out. “Many organisations with billions tied up in outdated infrastructure are reluctant or unable to pivot quickly.”
Future vision: Adaptation and flexibility
Prometheus Hyperscale’s approach is rooted in adaptability. “We expect AI infrastructure to evolve rapidly and continuously until 2030,” Thornock explains. “Locking in rigid designs is no longer feasible. Instead, we focus on adaptability, sustainability, and genuine innovation. Inference workloads are less resource-intensive than training, but they require extremely rapid data processing. Our infrastructure is specifically optimised to handle these demands.”
Another crucial shift, according to Thornock, involves the industry’s approach to assessing infrastructure costs. The true cost of unsustainable infrastructure includes long-term environmental impacts and financial risks. Until businesses factor these into their decisions, sustainability will remain undervalued.
In a future where AI infrastructure faces increasing scrutiny over its environmental footprint, the flexibility of the Prometheus model stands out. “We anticipate multiple evolutions in AI infrastructure,” Thornock concludes. “Our strategy is not about predicting every detail of the future – it is about being ready to evolve alongside the industry’s demands.
“In this context, AI’s continued growth depends not merely on technological breakthroughs but on an industry-wide shift toward genuine sustainability. Sustainability and efficiency aren’t mutually exclusive; they’re complementary goals essential for the future of AI.”