
Anthropic secures massive TPU capacity from Google and Broadcom to power Claude as enterprise demand doubles and AI infrastructure becomes the next battleground.
The AI infrastructure race is entering a new phase, and Anthropic is making its biggest bet yet. The company has signed a long-term agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU compute capacity, expected to go live starting in 2027.
At a time when compute has become the defining bottleneck in AI, this move signals a shift from experimentation to industrial-scale deployment.
Anthropic's latest deal is less about future ambition and more about catching up with present demand. The company revealed its run-rate revenue has crossed $30 billion in 2026, a sharp jump from $9 billion just months ago.
Equally telling is enterprise traction. Over 1,000 customers now spend more than $1 million annually on Claude, a figure that has doubled in under two months. This kind of acceleration is rare, even by AI standards, and underscores why securing compute at scale is now mission-critical.
Krishna Rao, Anthropic's CFO, framed the move as a necessity rather than a strategic luxury: "We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development."
The partnership hinges on Google's Tensor Processing Units (TPUs), purpose-built accelerators for large-scale AI workloads. Through its collaboration with Broadcom, Google will supply future generations of these chips and give Anthropic access to roughly 3.5 gigawatts of TPU-based compute.
This is not a single-vendor bet. Anthropic continues to run a multi-hardware strategy, leveraging AWS Trainium, NVIDIA GPUs, and Google TPUs. The approach allows workload optimisation across architectures, improving both performance and resilience.
In practical terms, this means enterprise customers using Claude across platforms, from Amazon Bedrock to Google Vertex AI and Microsoft Azure, can expect more consistent performance under heavy demand.
What stands out is the scale. Multi-gigawatt compute commitments place AI infrastructure closer to energy and telecom-level planning than traditional cloud provisioning.
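To get a feel for that scale, a back-of-envelope calculation helps. The per-chip power draw and overhead multiplier below are illustrative assumptions (neither Anthropic nor Google has disclosed these figures), but they show why a 3.5-gigawatt budget implies accelerator counts in the millions:

```python
# Back-of-envelope: how many accelerators fit in a multi-gigawatt power budget?
# All per-chip figures are illustrative assumptions, not disclosed specifications.

def accelerators_for_budget(power_budget_gw: float,
                            watts_per_chip: float = 700,
                            overhead_factor: float = 1.4) -> int:
    """Estimate how many accelerators a facility power budget supports.

    watts_per_chip:  assumed draw of one accelerator (illustrative).
    overhead_factor: assumed multiplier for cooling, networking, and
                     host machines (analogous to a datacentre PUE).
    """
    effective_watts_per_chip = watts_per_chip * overhead_factor
    return int(power_budget_gw * 1e9 / effective_watts_per_chip)

# Roughly 3.5 GW at ~1 kW effective per chip works out to several
# million accelerators.
print(accelerators_for_budget(3.5))  # → 3571428
```

Under these assumptions, the commitment translates to roughly 3.5 million chips, which is why such deals are planned like power plants rather than server orders.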
Most of this new capacity will be based in the United States, aligning with Anthropic's earlier $50 billion commitment to domestic compute infrastructure. It also reflects a broader geopolitical push to localise critical AI supply chains.
At the same time, the deal deepens Anthropic's ties with Google Cloud while maintaining Amazon as its primary training partner, highlighting a deliberate multi-cloud posture rather than platform dependency.
The announcement comes amid growing regulatory and geopolitical scrutiny. Anthropic has already pushed back against its classification as a "supply chain risk" by the US administration, while continuing to assert limits on how its AI can be deployed, particularly in surveillance and autonomous weapons contexts.
This creates a complex backdrop: rapid commercial scaling on one side and tightening oversight on the other. Anthropic's move is less an isolated deal and more a signal of where the AI economy is heading. Compute is no longer just an enabler; it is the product backbone.
As foundation model companies compete not just on capability but on availability and latency, infrastructure partnerships like this will increasingly define market leaders. The question now is not whether demand will keep rising, but whether even multi-gigawatt bets will be enough to keep pace.