Enterprises are discovering that the cloud architectures that powered the last decade of digital transformation are not automatically suited to the next decade of artificial intelligence. As AI workloads fracture across training, inference, sovereignty and cost constraints, the question is no longer where AI runs, but how many platforms organisations can realistically hold together.
The early cloud era rewarded standardisation. Workloads were predictable, elasticity mattered more than locality, and data volumes, while growing, rarely dictated architectural decisions. Hyperscale platforms were designed around those assumptions, optimised for transactional systems, web applications and broad horizontal scale. Artificial intelligence is now breaking every one of those assumptions at once.
Training workloads demand extreme compute density and throughput, while inference prioritises proximity to applications, users and data. Regulation governs where data can live, how it is processed and who can access it. Cost sensitivity varies sharply between experimentation and production. Together, these forces fragment requirements in ways that few legacy cloud models were designed to absorb.
According to Bill Unsworth, Director of IBM Cloud for the UK and Ireland, this fragmentation is not a temporary disruption but a structural change in how enterprise IT must be designed.
“Most hyperscale clouds were built to serve web applications and transactional workloads,” Unsworth says. “They are extremely good at scale and elasticity, but AI introduces very different patterns. You suddenly have workloads that are compute-intensive, data-heavy, highly regulated and often unpredictable, all at the same time. That combination exposes the limits of architectures that were never designed with AI in mind.”
Hybrid is no longer a compromise
The emergence of specialist AI clouds has sharpened this tension. Providers built explicitly for GPU-intensive workloads are positioning themselves as alternatives to hyperscalers for training and large-scale inference. At the same time, enterprises remain deeply invested in existing cloud platforms, private infrastructure and on-premises estates that cannot simply be abandoned.
IBM sees this not as a battle between platforms but as a structural shift in which no single environment can optimise every workload. Its answer, ‘hybrid by design’, starts from business outcomes and places each workload where performance, cost and governance requirements are best served.
Left uncoordinated, that fragmentation quickly becomes unmanageable: teams select the optimal platform for each workload, only to discover that moving data and models across environments introduces unexpected cost and operational risk. What enterprises increasingly want is for multiple platforms to behave as a single system.
The reality of multi-cloud AI
For many AI leaders, multi-cloud has become a necessity rather than a strategy. Training may run on a specialist GPU cloud, while inference may run on a hyperscaler close to the applications, with sensitive data remaining on-premises or in a sovereign environment. Each choice may be rational in isolation, but together they create a level of complexity that is easy to underestimate.
“Data gravity becomes the defining force very quickly,” Unsworth says. “If your training data lives in one environment, the compute wants to move there, not the other way around. Moving hundreds of gigabytes between clouds is expensive, slow and fragile. Architectures must respect where data naturally wants to sit.”
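The economics behind that observation are easy to sketch. The back-of-envelope calculation below shows how a one-off transfer stays cheap while repeated re-syncs come to dominate; the egress rate and sustained bandwidth are illustrative assumptions, not any provider's published figures.

```python
# Back-of-envelope estimate of cross-cloud data movement.
# The egress rate and sustained bandwidth below are illustrative
# assumptions, not published prices for any specific provider.

def transfer_estimate(data_gb: float,
                      egress_usd_per_gb: float = 0.09,
                      sustained_gbps: float = 2.0) -> tuple[float, float]:
    """Return (cost in USD, duration in hours) for moving data_gb between clouds."""
    cost = data_gb * egress_usd_per_gb
    hours = (data_gb * 8) / (sustained_gbps * 3600)  # GB -> gigabits, seconds -> hours
    return cost, hours

# Moving a 500 GB training set once is cheap; re-syncing it for
# every training run is where the cost and fragility accumulate.
for runs in (1, 20, 100):
    cost, hours = transfer_estimate(500 * runs)
    print(f"{runs:>3} syncs of 500 GB: ~${cost:,.0f}, ~{hours:,.1f} h of transfer")
```

The fragility Unsworth mentions compounds the arithmetic: a multi-hour transfer that fails midway must often be retried end to end.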
IBM's response focuses on orchestration rather than consolidation: a consistent control plane that spans environments, with common identity, governance, deployment tooling and a shared data fabric. The goal is not to eliminate fragmentation but to make it workable.
Where integration really breaks down
The idea of the ‘AI factory’ suggests production-line efficiency, yet the lived experience for most enterprises is still far messier. Tools proliferate faster than integration strategies, and complexity accumulates quietly until it becomes a constraint.
Successful integration, Unsworth argues, is less about connecting tools and more about establishing coherence across the stack. “In practice, integration means that developers can deploy AI workloads using the same APIs and pipelines wherever those workloads run,” he says. “Whether it is IBM Cloud, on-premises infrastructure or a partner environment, the operational experience should be consistent.”
Platforms such as Red Hat OpenShift play a critical role in enabling that consistency, abstracting infrastructure differences and supporting portability. However, Unsworth is careful not to present this as a universal fix. “Portability does not remove all complexity,” he says. “GPU availability, networking performance and storage characteristics are still environment specific. OpenShift gives you a common operational layer, but architectural discipline is still essential.”
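To make the “same APIs and pipelines” idea concrete, the sketch below uses the official Kubernetes Python client, which also speaks to OpenShift clusters, to apply one GPU-backed Deployment to several environments identified only by kubeconfig context. The context names, namespace and image are hypothetical.

```python
# A minimal sketch of "same pipeline, different environments": one
# Deployment spec applied to several Kubernetes/OpenShift clusters via
# their kubeconfig contexts. Context names and the image are hypothetical.
from kubernetes import client, config

INFERENCE_DEPLOYMENT = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="server",
                image="registry.example.com/models/llm-server:1.4",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},  # environment must expose GPUs
                ),
            )]),
        ),
    ),
)

# The deploy step is identical everywhere; what differs per environment
# (GPU types, storage classes, network policy) lives in the clusters, not here.
for ctx in ("ibm-cloud-prod", "onprem-openshift", "partner-gpu-cloud"):  # hypothetical
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="ai-workloads", body=INFERENCE_DEPLOYMENT)
```

What varies per environment, such as GPU models, storage classes and network policy, stays inside each cluster; the pipeline itself never branches. That is precisely the discipline Unsworth's caveat demands.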
Where most enterprises misjudge the challenge is not in data pipelines or model deployment, but in security and governance. “Security and governance are often treated as something you add later,” Unsworth says. “In AI systems, that approach fails very quickly. Identity, access control, auditability and policy enforcement touch every stage of the lifecycle. Retrofitting them after the fact is far harder than designing them in from the start.”
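At its simplest, designing governance in means a policy gate that every deployment request must pass before any infrastructure call is made. The sketch below is illustrative only; the workload fields and the sovereignty rule are assumptions, not IBM's actual policy model.

```python
# A sketch of "governance designed in": every deployment request passes a
# policy gate before any infrastructure call is made. The rules and the
# Workload fields are illustrative, not IBM's actual policy model.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_classification: str   # e.g. "public", "internal", "regulated"
    region: str                # where the workload would run
    audit_logging: bool        # is an audit trail enabled?

ALLOWED_REGIONS = {"regulated": {"eu-de", "uk-south"}}  # assumed sovereignty rule

def policy_gate(w: Workload) -> list[str]:
    """Return a list of violations; empty means the deployment may proceed."""
    violations = []
    if not w.audit_logging:
        violations.append("audit logging must be enabled before deployment")
    allowed = ALLOWED_REGIONS.get(w.data_classification)
    if allowed is not None and w.region not in allowed:
        violations.append(f"{w.data_classification} data may not run in {w.region}")
    return violations

w = Workload("claims-model", "regulated", "us-east", audit_logging=False)
problems = policy_gate(w)
if problems:
    raise PermissionError("; ".join(problems))  # fail closed, before anything ships
```

The point is structural: the gate fails closed and runs first, so auditability and placement rules cannot be bolted on after a workload is already live.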
Ecosystems rather than vendors
Modern AI infrastructure is assembled from multiple ecosystems rather than bought from a single vendor. IBM positions itself as an integrator, Unsworth says, giving customers the freedom to choose components and providers while making those choices operationally viable.
That pragmatism extends to IBM’s relationship with specialised AI clouds. “Sometimes we are partners, sometimes competitors, and often both,” he says. “The key is being honest about the trade-offs. For some workloads, a specialised provider offers compelling economics. For others, the integration overhead outweighs the benefit.”
Claims of dramatic cost savings, Unsworth suggests, rarely survive closer scrutiny. “When people say a platform is fifty per cent cheaper, the first question should always be cheaper for what,” he says. “Once you factor in data movement, integration effort and operational complexity, total cost of ownership looks very different.”
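Unsworth's “cheaper for what” question lends itself to a simple worked comparison. In the sketch below, a headline fifty per cent compute saving is set against added egress and integration costs; every figure is an illustrative assumption.

```python
# A worked version of "cheaper for what?": compare the headline compute price
# with total cost once data movement and integration effort are included.
# All figures are illustrative assumptions.

def monthly_tco(compute: float, egress_gb: float, egress_rate: float,
                integration_hours: float, hourly_rate: float = 120.0) -> float:
    return compute + egress_gb * egress_rate + integration_hours * hourly_rate

incumbent = monthly_tco(compute=100_000, egress_gb=0, egress_rate=0.09,
                        integration_hours=40)
specialist = monthly_tco(compute=50_000,          # the "fifty per cent cheaper" headline
                         egress_gb=200_000,       # training data re-synced across clouds
                         egress_rate=0.09,
                         integration_hours=400)   # new pipelines, monitoring, reviews

print(f"incumbent:  ${incumbent:,.0f}/month")
print(f"specialist: ${specialist:,.0f}/month")  # the 50% saving largely evaporates
```

Under these assumptions the discount disappears entirely, which is not an argument against specialist clouds, only against reading the headline price as the total cost.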
Designing for an unsettled future
Unsworth expects platform fragmentation to deepen rather than resolve, with hyperscalers, specialist AI clouds and sovereign infrastructure all continuing to expand. Workloads, he argues, will increasingly gravitate to whichever environment suits them best.
IBM’s strategy is to embrace that reality by doubling down on hybrid flexibility, ecosystem integration and governance as first-class design principles. “For CIOs, the most important decision today is not which platform to back,” Unsworth says. “It is how to design an architecture that can adapt as platforms change.”
That means separating training from inference, abstracting models from hardware dependencies and treating governance as foundational rather than optional. “If you build for modularity from day one, you preserve choice,” he says. “That is the only practical way to future-proof AI infrastructure in a market that is still finding its shape.”
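One way to read “abstracting models from hardware dependencies” is as a thin interface between application code and whichever runtime serves the model. The sketch below is a minimal illustration; the backend classes and endpoint are hypothetical stand-ins for real runtimes.

```python
# A minimal sketch of "abstracting models from hardware dependencies":
# application code targets a small interface, and platform-specific
# backends are swapped behind it. Backend names here are hypothetical.
from typing import Protocol

class InferenceBackend(Protocol):
    def generate(self, prompt: str) -> str: ...

class LocalGPUBackend:
    def generate(self, prompt: str) -> str:
        return f"[local-gpu] completion for: {prompt}"  # placeholder for a real runtime

class HostedAPIBackend:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # e.g. a sovereign or hyperscale endpoint

    def generate(self, prompt: str) -> str:
        return f"[{self.endpoint}] completion for: {prompt}"  # placeholder HTTP call

def answer(backend: InferenceBackend, prompt: str) -> str:
    # Application logic never mentions GPUs, regions or vendors.
    return backend.generate(prompt)

print(answer(LocalGPUBackend(), "summarise this claim"))
print(answer(HostedAPIBackend("eu-de.inference.example.com"), "summarise this claim"))
```

Swapping backends then becomes a configuration decision rather than a rewrite, which is the modularity Unsworth describes.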
In an environment defined by competing clouds and accelerating complexity, the emerging lesson from enterprise AI deployments is clear: integration, not raw compute, is becoming the decisive capability.