
Anthropic has expanded its partnership with Google and Broadcom to secure multiple gigawatts of compute, positioning the startup to scale training and inference capacity as demand rises.
A related filing describes Broadcom agreeing to produce future generations of Google's TPUs, alongside an expanded Anthropic deal providing access to roughly 3.5 gigawatts of computing capacity. Anthropic also says its business momentum has accelerated: reported run-rate revenue has crossed $30B, up from about $9B at the end of 2025.
This is a significant datapoint in the ongoing race to lock in scarce AI infrastructure, especially large-scale accelerator capacity. Compute availability isn't just a pricing or procurement issue; it shapes training timelines, the ability to serve more users, and the cost structure of inference at production scale.
The deal also signals how AI supply chains are being organized around TPU ecosystems. Instead of treating accelerators as interchangeable parts, the partnerships are structured around who builds future TPU generations and how much capacity a given AI company can reliably access.
The scale language of "multiple gigawatts" underscores that the bottleneck increasingly sits in datacenter power and end-to-end infrastructure, not just the chips themselves. Securing long-term capacity can translate into faster product iteration and more consistent service quality.
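To make the gigawatt figure concrete, here is a rough back-of-envelope sketch in Python. The power-usage-effectiveness (PUE) and per-chip power draw below are illustrative assumptions, not figures from the filing or the deal; the point is only the order of magnitude.

```python
# Back-of-envelope: what "multiple gigawatts" means in accelerator terms.
# All figures below are illustrative assumptions, not disclosed deal terms.

FACILITY_POWER_GW = 3.5   # capacity figure cited in the filing
WATTS_PER_GW = 1e9

PUE = 1.3                 # assumed power usage effectiveness (cooling, overhead)
CHIP_POWER_W = 700        # assumed per-accelerator draw, incl. host share

# Power left for IT equipment after facility overhead
it_power_w = FACILITY_POWER_GW * WATTS_PER_GW / PUE
approx_chips = it_power_w / CHIP_POWER_W

print(f"IT power after overhead: {it_power_w / 1e9:.2f} GW")
print(f"Approximate accelerators supported: {approx_chips:,.0f}")
```

Under these assumptions, 3.5 GW of facility power supports on the order of a few million accelerators, which is why capacity in deals at this scale is negotiated in units of power rather than chip counts.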
Finally, Anthropic's revenue run-rate claim suggests the company is converting that infrastructure into commercial outcomes. If accurate, it supports the broader market narrative that leading model companies are moving from "capacity-limited growth" toward "demand-limited growth"; locking in compute contracts is precisely what enables that shift, which makes them even more strategically important.