As AI adoption accelerates, data centres must rapidly adapt to support the immense computational power and connectivity demands of modern AI workloads. Mark Venables spoke to Tiffany Osias, Managing Director of xScale at Equinix, to see how AI is shaping the infrastructure of the future, and how the company is ensuring businesses can scale, optimise, and sustain AI-driven innovation with flexibility and efficiency.
The meteoric rise of AI applications is reshaping the technological landscape, presenting both opportunities and challenges for data centre operators. As businesses increasingly rely on AI to drive innovation, optimise processes, and enhance customer experiences, the demands placed on data centres are evolving at an unprecedented pace. AI workloads, particularly those involving large language models and complex machine learning algorithms, require vast computational power, low-latency connectivity, and robust data-handling capabilities. This shift places immense pressure on data centre operators to deliver infrastructure that is not only high-performing but also scalable and sustainable.
At the heart of these challenges lies the need to support diverse technologies, from GPU-intensive training workloads to CPU-driven inferencing tasks. Data centre operators must balance the rapid pace of hardware innovation with the necessity of future-proofing their facilities, adopting advanced cooling solutions, higher power densities, and energy-efficient designs. Furthermore, the integration of hybrid environments, blending cloud and on-premises systems, complicates connectivity requirements, necessitating seamless and secure interconnectivity.
Meeting AI’s growing challenge
Equinix has established itself as a key enabler of the infrastructure necessary for AI applications, which require immense computational power and seamless data flow. With the rapid evolution of AI technologies, there is significant discussion around the optimal hardware for these applications. “GPUs have become a focal point in public discourse, largely due to their association with high-performance AI workloads,” says Tiffany Osias, Managing Director of xScale at Equinix. “However, many organisations are successfully running AI on CPUs, leveraging years of experience with machine learning on these platforms.
“The choice of technology depends on the specific use case, with some companies experimenting with application-specific integrated circuits (ASICs) and tensor processing units (TPUs) for highly specialised tasks. The diversity of these options underscores the importance of flexibility in infrastructure design, allowing companies to adapt to their unique requirements both now and in the future.”
Equinix caters to a wide range of customers, from hyperscalers deploying advanced large language models to enterprises with varied infrastructure needs. Hyperscalers often rely on GPU-intensive architectures, which come with specific demands for high power density and advanced cooling. “We have responded by developing data centres tailored to these requirements, incorporating solutions such as direct-to-chip liquid cooling for maximum efficiency,” Osias adds. “At the same time, enterprises with mixed architectures benefit from our facilities that support a broad spectrum of technologies, from CPUs to GPUs. This adaptability ensures that customers have access to infrastructure capable of meeting their performance, density, and cooling needs, regardless of their specific workloads.”
Keeping pace with the rapid advancements in hardware is a significant challenge for both enterprises and service providers. “The lifecycle of GPUs and other specialised hardware is shortening, with companies often finding that the equipment they invested in just a year ago is already outdated,” Osias continues. “This challenge is particularly acute for GPU-as-a-service providers, who must continually upgrade their offerings to remain competitive.
“We address this issue by maintaining close partnerships with silicon providers and OEMs. Years of co-innovation with companies such as Nvidia have allowed us to anticipate market demands and design data centres that are ready to accommodate new technologies as soon as they become available. This proactive approach ensures that customers can deploy cutting-edge infrastructure with confidence, knowing that their needs have been considered from the outset.”
The demands of large language models
The infrastructure requirements for large language models (LLMs) are among the most demanding in the AI landscape. Training these models involves processing vast amounts of data, requiring significant computational power and storage. However, the demands change as the model matures. Once trained, LLMs are fine-tuned to specific datasets, a process that requires less data and computational intensity. The final phase, inferencing, involves deploying the model to make predictions or generate outputs, typically with minimal resource requirements compared to the training phase.
“This progression highlights the importance of scalable and adaptable infrastructure,” Osias says. “Hyperscalers are at the forefront of LLM training, while most enterprises focus on tuning and inferencing. By deploying tuned models at the edge, organisations can reduce latency and ensure optimal performance for end-users. We play a critical role in supporting this lifecycle, providing the connectivity and computational resources needed at every stage.”
To meet these diverse demands, hybrid infrastructure has become the default choice for enterprises, enabling them to combine the scalability of cloud services with the control and reliability of on-premises systems. “Equinix facilitates this integration through our extensive global network, which hosts over 40 per cent of the world’s cloud on-ramps,” Osias explains. “These connections allow enterprises to access cloud services securely and privately, bypassing the public internet to ensure greater reliability and performance. The hybrid model also supports a wide range of use cases, from cloud-based proof-of-concept projects to in-house deployments of proprietary applications. Many organisations choose to house their private infrastructure in our data centres, taking advantage of their global reach, advanced security measures, and interconnectivity with other networks and service providers.”
Where should data live?
Scalability is a multifaceted challenge for enterprises implementing AI. Deciding where to locate data is a critical first step, influenced by factors such as regulatory requirements, data residency laws, and the need for proximity to end-users. Equinix addresses these challenges by offering infrastructure in strategic locations worldwide, ensuring that organisations can meet their current needs while preparing for future growth.
“Scalability also involves accommodating diverse workloads, from high-density GPU deployments to traditional CPU-based systems,” Osias says. “Equinix data centres are designed to support these varied requirements, incorporating advanced cooling solutions and renewable energy initiatives to ensure sustainable growth. Our commitment to sustainability is evident in our goal to achieve 100 per cent renewable energy across our global platform by 2030, a target we are already close to reaching with 96 per cent global coverage.”
The hybrid infrastructure model reflects broader trends in enterprise AI adoption. Many organisations begin their AI journey with proof-of-concept projects in the cloud, where they can experiment with minimal upfront investment. Table-stakes workloads, such as workforce productivity tools, are also typically hosted in the cloud due to their reliance on SaaS solutions. “As organisations seek to differentiate themselves through industry-specific applications, they often turn to in-house infrastructure,” Osias adds. “These proprietary systems allow companies to maintain greater control over sensitive data and optimise their AI models for unique use cases. We support this transition by providing secure, scalable, and interconnected facilities that meet the needs of diverse industries.”
AI as a service is another trend that is rapidly gaining traction, driven by the need for flexible and accessible solutions that can scale to meet demand. Service providers offering AI as a service face unique challenges, including the need to build an infrastructure capable of supporting millions of users. “This requires significant investments in computing, storage, and networking resources,” Osias continues. “Equinix can offer the high-performance data centres and interconnectivity needed to deliver reliable and scalable services. The rise of AI as a service underscores the importance of infrastructure flexibility as organisations seek solutions that can adapt to changing requirements.”
Taking a future view
“Looking ahead, the infrastructure landscape for AI is set to evolve significantly,” Osias says. “As companies invest in training infrastructure to meet immediate needs, they must also consider how to repurpose these resources for inferencing and other applications in the future. The pace of hardware innovation is accelerating, requiring organisations to adopt strategies that maximise the value of their investments. Our position at the intersection of multiple technology domains enables us to anticipate these trends and provide facilities that meet emerging requirements. For example, the shift from air cooling to liquid cooling is already being supported in Equinix data centres, ensuring that customers are prepared for the next generation of high-performance computing.”
For executives planning their IT infrastructure to support AI-driven innovation, several key considerations stand out. Data aggregation is critical, as organisations must determine where their data resides and how it will be used to train and refine AI models. Compliance and security are equally important, particularly in industries with stringent regulatory requirements. Finally, organisations must build robust partner ecosystems to ensure the success of their AI initiatives.
“While AI dominates discussions around technology and innovation, it is just one part of a broader landscape,” Osias concludes. “Organisations are also focused on modernising their networks, improving customer experiences, and adopting new applications to drive growth. We remain committed to supporting these diverse needs, providing the infrastructure and expertise necessary to thrive in an increasingly interconnected world.”