
The frontier model maker is looking to greatly expand its compute infrastructure to meet surging customer demand
Anthropic has announced a major expansion of its partnership with Google and Broadcom, which will see it rely more heavily on Google's tensor processing unit (TPU) chips to meet computational demand.
The frontier lab said it has signed an agreement to use "multiple gigawatts of next-generation TPU capacity", which will come online from 2027 onwards.
It's unclear whether the specific TPU models deployed as part of the deal will be TPU v7 Ironwood, announced at Google Cloud Next 2025, or an unspecified future model. ITPro contacted Anthropic for clarification but did not receive a reply at time of publication.
The deal will see Anthropic make use of a further 3.5 GW of TPU compute capacity, per CNBC reporting. In its announcement, the firm explained the "vast majority" of this would be physically sited in the US.
Anthropic uses a mix of TPUs, Amazon Trainium chips, and Nvidia GPUs to fulfil its training and inference requirements. Amazon is still its primary training partner and cloud provider, with the firms having worked together on the million-chip Project Rainier cluster throughout 2025.
In October 2025, Anthropic announced plans to expand its TPU footprint by up to one million chips throughout 2026, the equivalent of over a gigawatt of compute capacity.
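Those figures allow a rough back-of-envelope check on what the deal implies per chip. The inputs below are taken from the reported numbers in this article, not from Google or Anthropic specifications, so the result is illustrative only:

```python
# Back-of-envelope: implied all-in power draw per chip, using the
# reported equivalence of ~1,000,000 TPUs to "over a gigawatt".
# Inputs are the article's reported figures, not vendor specs.
chips = 1_000_000
capacity_watts = 1e9  # 1 GW expressed in watts

watts_per_chip = capacity_watts / chips
print(f"Implied all-in draw: ~{watts_per_chip:.0f} W per chip")
```

On those assumptions, the reported 3.5 GW expansion would correspond to roughly 3.5 million chips at the same per-chip draw, though the actual ratio depends on the TPU generation deployed and datacentre overheads.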
Google says its TPUs, which Broadcom co-designs and Taiwan Semiconductor Manufacturing Company (TSMC) manufactures, offer highly competitive teraflops per watt compared to competing chips.
Unlike general-purpose GPUs, TPUs are application-specific integrated circuits (ASICs) designed specifically for machine learning and AI workloads.
"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development," said Krishna Rao, CFO of Anthropic.
"We are making our most significant compute commitment to date to keep pace with our unprecedented growth."
Like its competitor OpenAI, Anthropic is looking to rapidly expand its compute capacity to meet rising customer inference demand, driven by popular tools such as Claude Code, Claude Cowork, and agentic tool use within cloud platforms like Amazon Bedrock and Google Vertex AI.
In recent weeks, Anthropic announced new session limits for free, Pro, and Max subscribers covering all weekday Claude use from 1pm to 7pm GMT. The move prompted criticism from users on platforms such as Reddit, with some reporting that they hit rate limits within 30-35 minutes of vibe coding with Claude Code.
On 4 April, Anthropic removed allowances for third-party harnesses from the standard Claude subscription, effectively locking popular tools like OpenClaw behind a paywall.
In the same announcement as the new compute deal, Anthropic said its run-rate revenue has surpassed $30 billion, up from $9 billion at the end of 2025. The firm added that over 1,000 business customers now spend more than $1 million with Anthropic annually.
Per reporting from Forbes, Anthropic reports its revenue as a gross figure, inclusive of revenue accrued from hyperscaler customers accessing its models.
This results in a higher top-line revenue figure, as it doesn't deduct the cut Google Cloud and AWS take each time a customer pays for Claude access via their platforms.
ITPro asked for clarification on the precise way in which Anthropic calculates its revenue and if the $30 billion figure excludes hyperscaler revenue sharing, but did not receive a reply by the time of publication.