Meta has agreed to bring tens of millions of custom processor cores from Amazon Web Services into its infrastructure, signalling a deeper shift in how large-scale artificial intelligence systems are being built and powered.
The move centres on the deployment of AWS Graviton processors, a family of Arm-based chips designed by Amazon, which will now form a significant part of Meta’s compute portfolio. The agreement positions Meta among the largest users of Graviton cores globally and reflects a broader recalibration of infrastructure priorities as AI systems evolve beyond model training into continuous, autonomous operation.
Compute becomes the constraint
The expansion is tied directly to Meta’s investment in so-called agentic AI systems: applications designed to reason, plan and execute tasks independently. Unlike earlier generations of AI, which focused primarily on training large models, these systems demand sustained, distributed processing across billions of interactions.
Processing cores, the fundamental execution units within CPUs, are increasingly central to this shift. As workloads move towards constant inference and decision-making, the emphasis is shifting away from peak compute performance and towards efficiency, bandwidth and the ability to sustain continuous execution at scale.
AWS said its latest Graviton5 cores are designed to deliver faster data processing and greater bandwidth, characteristics that are becoming critical as AI systems operate in real time rather than in discrete training cycles. In that context, the partnership reflects a growing recognition that the infrastructure challenge is no longer limited to model capability, but extends to how data is moved, processed and acted upon across distributed systems.
Nafea Bshara, Vice President and Distinguished Engineer at Amazon, framed the agreement as part of a wider shift towards integrated AI infrastructure. He said that combining purpose-built silicon with cloud-based data and inference services creates the foundation required to support AI systems operating at global scale.
Diversification over optimisation
For Meta, the decision to incorporate Graviton into its infrastructure strategy underlines a deliberate move away from reliance on any single compute architecture. The company already invests heavily in its own data centres and custom hardware, but the addition of AWS silicon introduces a more diversified model, with hardware aligned to specific workload requirements.
Santosh Janardhan, Head of Infrastructure at Meta, described diversification as a strategic imperative as the company scales its AI ambitions. He pointed to the need to match different types of workloads with the most appropriate compute resources, particularly as agentic AI increases demand for CPU-intensive processing alongside more traditional GPU-driven tasks.
The initial deployment will involve tens of millions of cores, with scope for further expansion as Meta’s AI systems grow. While the announcement does not specify financial terms or timelines beyond the first phase, the scale of the deployment indicates a long-term alignment between the two companies around infrastructure development.
The implications extend beyond the partnership itself. As AI systems become more autonomous and persistent, the industry is being forced to reconsider the balance between general-purpose compute, custom silicon and cloud-based architectures. The result is a more fragmented but potentially more resilient infrastructure landscape, where performance is defined less by raw capability and more by how effectively different components are orchestrated.
In that sense, the Meta and AWS agreement reflects a broader transition. Artificial intelligence is no longer simply a software challenge. It is increasingly an infrastructure problem, shaped by the physical realities of compute, data movement and system design.