
Anthropic has signed its largest compute agreement to date with Google and Broadcom, securing multiple gigawatts of next-generation tensor processing unit capacity to support its rapidly expanding artificial intelligence operations.
The capacity, expected to come online from 2027, will be delivered through Google Cloud infrastructure and powered by custom TPUs developed in partnership with Broadcom. The agreement includes access to roughly 3.5 gigawatts of compute, placing it among the largest AI infrastructure commitments disclosed by a model developer.
The deal marks a major escalation in Anthropic's infrastructure strategy as demand for its Claude models accelerates across enterprise customers.
Anthropic said its annualised revenue run rate has surpassed US$30 billion in 2026, rising sharply from about US$9 billion at the end of 2025. At the same time, the number of customers spending more than US$1 million annually has doubled to over 1,000 within a matter of months.
The new agreement deepens Anthropic's existing relationships with both Google Cloud and Broadcom. It builds on an earlier TPU expansion announced in 2025 and aligns with the company's broader plan to invest US$50 billion in strengthening computing infrastructure in the United States, where most of the new capacity will be located.
Infrastructure race
The scale of the deal reflects intensifying competition among AI developers to secure compute resources as model sizes and usage continue to grow.
Custom chips such as Google's TPUs have gained traction as an alternative to Nvidia's GPUs, which dominate the current AI training market and face pricing and supply constraints. Google has positioned its TPUs as a central part of its cloud growth strategy, using them to attract large AI customers and differentiate its infrastructure offerings.
Broadcom plays a key role in that ecosystem, working with Google to co-develop successive generations of TPUs and supply networking components for AI data centres. The companies have extended their chip partnership through at least 2031, providing long-term visibility into supply for hyperscale deployments.
For Anthropic, the arrangement secures access to a dedicated pipeline of high-performance compute without requiring the company to build its own semiconductor stack.
Multi-cloud strategy
Anthropic continues to run a diversified hardware strategy rather than relying on a single provider.
The company uses a mix of Amazon Web Services Trainium chips, Google TPUs and Nvidia GPUs to train and deploy its Claude models. Amazon remains its primary cloud provider and training partner, even as the Google and Broadcom relationship expands.
Claude is also distributed across multiple cloud platforms, including AWS, Google Cloud and Microsoft Azure, giving Anthropic broad reach across enterprise environments.
That approach allows workloads to be distributed across different architectures, balancing cost, performance and availability as demand fluctuates.
Enterprise demand
Anthropic said the new compute capacity is needed to support what it described as exponential growth in the usage of its models.
"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development," said Krishna Rao, Anthropic's Chief Financial Officer.
"We are making our most significant compute commitment to date to keep pace with our unprecedented growth," Rao added.
The rapid increase in high-spending enterprise customers suggests that AI adoption is shifting from experimentation to production use, driving sustained demand for large-scale compute resources.
At the same time, the reliance on partners such as Google and Broadcom highlights how the AI industry is evolving into a tightly linked ecosystem of model developers, cloud providers and chip designers.
As competition intensifies, securing long-term access to compute at scale has become a defining factor in the race to build and deploy frontier AI systems.