
Key Caveat: A consumption clause ties full use of the 3.5-gigawatt capacity to Anthropic's continued commercial success.
Anthropic has secured 3.5 gigawatts of Google TPU capacity in a deal with Broadcom, more than tripling the compute from an October 2025 predecessor agreement as the company's revenue run rate surges past $30 billion.
Google's Tensor Processing Units (TPUs), custom AI accelerator ASICs, are optimized for high-volume neural network training and inference.
The deal, disclosed through a Broadcom SEC filing rather than Anthropic's own announcement, will deliver the new capacity starting in 2027. Anthropic's revenue run rate has exceeded $30 billion, up from $9 billion at the end of 2025, representing a more than threefold increase in roughly four months. The company's enterprise customer base of businesses spending more than $1 million annually has simultaneously doubled from 500 to more than 1,000 in roughly five weeks.
The new agreement builds on the October 2025 deal that granted Anthropic access to one million Google TPUs and more than 1 gigawatt of compute. That predecessor deal was valued at tens of billions of dollars and marked the beginning of a deepening relationship between Anthropic and Google Cloud. At 3.5 times the capacity, this new commitment reflects how decisively the revenue trajectory has shifted since that first agreement.
The expanded partnership for multiple gigawatts of next-generation compute will come online starting in 2027 under a multiyear deal running until 2031, according to the SEC filing. The majority of the new compute will be housed in the United States, extending Anthropic's $50 billion commitment to U.S. compute infrastructure that includes its own AI data centers in Texas and New York. Broadcom and Google are also in discussions with other operational and financial partners in connection with the deployment.
Krishna Rao, Anthropic's CFO, called the agreement "our most significant compute commitment to date," saying it was needed to keep pace with demand.
Anthropic's announcement post omitted the 3.5-gigawatt figure entirely. The specific capacity was revealed only through Broadcom's regulatory filing, which also disclosed the 2031 timeline and a consumption clause specifying that full capacity deployment is contingent on Anthropic's continued commercial success. That selective disclosure suggests Anthropic chose to foreground its revenue milestones over the infrastructure commitments that underpin them.
Anthropic runs Claude across AWS Trainium, Google TPUs, and Nvidia GPUs, with Amazon Web Services named as its primary cloud and training partner. The 3.5-gigawatt TPU commitment is one component of a broader multi-cloud strategy that distributes risk across providers and avoids dependence on any single chip architecture.
The October 2025 predecessor deal established the baseline of more than 1 gigawatt from one million TPUs, with the compute expected to come online for Anthropic in 2026. The expansion to 3.5 gigawatts in under six months underscores how rapidly demand for Claude has outstripped initial projections.
Anthropic's separate commitment to build its own AI data centers in Texas and New York, announced in November 2025, signals the Google Cloud deal is part of a larger infrastructure push rather than a one-off procurement. The revenue surge from $9 billion to $30 billion since October provides the financial rationale for scaling this aggressively and underscores how sharply competitive dynamics in enterprise AI have accelerated since late 2025.
The compute expansion comes as Anthropic's financial trajectory accelerates. The company closed a $30 billion Series G funding round in February 2026 that valued it at $380 billion, and its revenue run rate has now tripled since the prior Google deal was signed.
Enterprise customers spending $1 million or more annually grew from roughly 500 to more than 1,000 in about five weeks, validating CEO Dario Amodei's assertion that enterprises account for 80% of Anthropic's business. That enterprise concentration has also shielded Anthropic from the volatile consumer-app dynamics that have buffeted other AI labs.
On a March earnings call, Broadcom CEO Hock Tan said the company was "off to a very good start in 2026" with Anthropic, confirming 1 gigawatt of compute was already being delivered and that demand was expected to surge beyond 3 gigawatts in 2027. The expanded deal now formalizes that demand at 3.5 GW, turning Tan's forward-looking statement into a contractual commitment weeks later.
Broadcom shares rose more than 6% on April 7 following the announcement, their second-strongest day of the year. The stock had fallen almost 10% in prior sessions due to concerns about AI buildout costs coupled with soaring energy prices linked to the conflict in Iran.
Mizuho analysts estimated Broadcom would collect $21 billion in AI revenue from Anthropic alone in 2026, rising to $42 billion in 2027. That represents a sharp upward revision from Broadcom's own Q1 FY2026 AI semiconductor revenue guidance of roughly $8.2 billion just two months earlier. Broadcom reported overall Q1 FY2026 revenue of $18 billion, up 28% year-over-year, with AI semiconductor revenue surging 74%, providing a strong baseline that makes the scale of the Mizuho estimates all the more striking.
One analyst note framed the upside this way: "We already saw upside to medium-term revenue and profit expectations off the back of recent results; these new deals help underpin that idea if deployment ramps as planned."
Building on that outlook, Citi analysts maintained their buy rating for Broadcom, projecting it could surpass its $100 billion revenue target and reach more than $130 billion driven by the Google and Anthropic deals. Mizuho also maintained its buy recommendation, noting that the tighter TPU partnership strengthens Broadcom's competitive position against other chipmakers.
The deal's scale is all the more striking given the regulatory headwinds Anthropic faces. The U.S. Defense Department labeled Anthropic a supply-chain risk following a standoff over its safety guardrails, a designation that prompted more than 100 businesses to contact the company expressing doubt over their ability to continue working with it.
Anthropic won an injunction against the Trump administration over the designation. Rather than dampening demand, the public dispute appeared to boost it: the Claude app briefly became the top free U.S. app in Apple's App Store in February. Anthropic is also capturing more than 73% of all spending among companies buying AI tools for the first time, while OpenAI has dropped to around 27%, a reversal that underlines how quickly market share has shifted in enterprise AI.
Anthropic's 3.5-gigawatt commitment is smaller than OpenAI's 30-plus gigawatts of total compute pledged across Nvidia, Broadcom, Oracle, and AMD partnerships, but its growth from 1 gigawatt to 3.5 gigawatts in under six months signals the gap may be narrowing faster than raw numbers suggest. Broadcom is simultaneously developing custom AI chips for OpenAI, giving it unusual visibility into demand trajectories across competing AI labs.
Whether Anthropic can sustain the commercial momentum needed to consume 3.5 gigawatts remains the central question. Broadcom's SEC filing includes a consumption clause noting that capacity usage is "dependent on Anthropic's continued commercial success." With revenue tripling in four months and enterprise customers doubling in five weeks, the trajectory currently supports the bet, but the fine print makes clear that Broadcom is hedging on Anthropic's future growth, not simply endorsing its recent past.