
On April 20, Amazon announced it was investing up to another $25 billion in Anthropic, bringing its total commitment to $33 billion. In exchange, Anthropic agreed to spend more than $100 billion on AWS infrastructure over the next decade and secured up to 5 gigawatts of compute capacity spanning Amazon's Graviton CPUs and Trainium2 through Trainium4 AI chips.
Five gigawatts is roughly the average power draw of a city the size of New York. It is also more compute than any single AI company has ever locked in through one contract. And here is the part that matters: most of it does not exist yet.
TSMC has not fabricated the chips. The power plants that will feed those data centers are still being built. The cooling systems, the networking fabric, the physical buildings themselves, most of this is capacity that Amazon is committing to construct over the next several years.
Amazon did not pay $33 billion for a stake in Anthropic. Amazon paid $33 billion to guarantee that when the compute comes online, Anthropic runs on it.
That is a structurally different kind of deal from anything the tech industry has seen at this scale, and the reason it is happening is the single most important fact in AI right now: compute is supply-constrained, and supply does not move at the speed of demand.
For most of the last decade, cloud economics worked like this. You had workloads. You rented compute from a hyperscaler. The hyperscaler had plenty of capacity because demand grew incrementally and supply could grow with it. Compute was abundant. It was a commodity.
AI broke that model.
The demand curve for AI compute is not incremental. It is vertical. Anthropic's annualized revenue run rate hit $30 billion in April 2026, up from $9 billion at the end of 2025. More than a tripling in four months. Over 1,000 enterprise customers now spend more than $1 million a year on Claude, up from 500 in February. Eight of the Fortune 10 are customers. And Anthropic is not even the largest AI player. Across OpenAI, Anthropic, Google, xAI, and the rest, the aggregate demand for training and inference compute is growing at a rate that the physical world cannot keep up with.
The supply side has four constraints, and each one takes years to move: chip fabrication, power generation, cooling and networking, and the physical construction of the data centers themselves.
When demand doubles every few months and supply takes three to five years to meet it, the economics change completely. Compute stops being a commodity. It becomes a scarce resource, and scarce resources get locked in by whoever has the capital and the credibility to commit early.
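The arithmetic behind that mismatch is worth making concrete. Here is a minimal sketch of the compounding gap, with assumed numbers (the six-month doubling period is an illustration, not a sourced figure):

```python
# Illustrative only: the doubling period is an assumption, not sourced data.
def demand_multiple(doubling_months: float, horizon_months: float) -> float:
    """How much demand grows over a horizon if it doubles every `doubling_months`."""
    return 2 ** (horizon_months / doubling_months)

# Suppose demand doubles every 6 months, and new capacity takes 3-5 years to build.
for years in (3, 5):
    m = demand_multiple(doubling_months=6, horizon_months=years * 12)
    print(f"Over a {years}-year build-out, demand grows {m:.0f}x")
```

Even with a much slower doubling period, the exponent dominates: any capacity sized to today's demand is dwarfed long before a greenfield data center comes online, which is why buyers commit years ahead.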
That is the frame for understanding Amazon's $33 billion. Amazon is not funding Anthropic. Amazon is pre-committing capacity it plans to build, to a customer whose demand it already knows can absorb it. The money is moving now because the capacity will not exist later unless someone agrees to pay for it before it is built.
The second part of the story, and the part most commentary gets wrong, is about what Anthropic is doing across the industry.
Anthropic itself frames this as core strategy, describing Claude as "the only frontier AI model available to customers on all three of the world's largest cloud platforms: AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry)." That is not a hedge. It is a distribution strategy. At this point, Anthropic has committed infrastructure relationships with all three major hyperscalers.
It would be easy to read this as Anthropic playing hyperscalers against each other. That framing is wrong. The hyperscalers are not rivals here. They are separate ecosystems, and each one has a customer base that Anthropic cannot access any other way.
A Fortune 500 company that runs its infrastructure on AWS is not going to switch to Azure just to use Claude. Its security reviews, its compliance audits, its data residency requirements, its identity layer, its existing deployment pipelines, all of it is built around AWS. Same for a Microsoft shop. Same for a Google Cloud customer. The cloud provider is not a vendor you swap. It is a platform you build on top of.
So when Anthropic signs a deal with AWS, it is not renting compute. It is buying distribution into the thousands of enterprises that live inside the AWS ecosystem and cannot leave.
When it signs with Google, it is buying distribution into Google Cloud's enterprise base. When it signs with Microsoft, it gets access to the Microsoft 365 customer base, which is measured in hundreds of millions of seats.
Each hyperscaler has a compute roadmap, a customer base, and a set of enterprise lock-ins that Anthropic cannot build independently. So it is plugging into all three. Not because it is hedging. Because each one reaches a different slice of the enterprise market, and Anthropic's revenue curve only keeps growing if it is available wherever its customers already are.
The obvious comparison is OpenAI, and it is worth being specific about where the comparison breaks.
OpenAI has a similar set of arrangements: roughly $110 billion in recent funding, including $50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank, and a $300 billion cloud infrastructure deal with Oracle. On the surface, it looks like the same structure as Anthropic's. Investors wire money, the AI company spends it on infrastructure, the infrastructure providers book the spend as revenue.
But the OpenAI and Anthropic structures diverge in one critical way. Oracle has to build most of the capacity OpenAI needs, and Oracle does not have the customer ecosystem that a hyperscaler has. If OpenAI's enterprise revenue curve bends the wrong way, Oracle is stuck with data center capacity it cannot easily sell to anyone else. Oracle is betting on OpenAI specifically, not on the AI compute market generally.
Amazon, Google, and Microsoft are in a different position. Their data centers serve tens of thousands of enterprise workloads across hundreds of industries. If AI demand from Anthropic plateaus, the same compute gets redeployed to other customers. The capacity is fungible because the ecosystem is fungible. The hyperscalers are not betting on Anthropic. They are betting on AI compute demand broadly, and using Anthropic as the anchor tenant to justify the build.
That is a much more durable structure. It also explains why Anthropic can get three hyperscalers to fund its growth simultaneously: each one is getting a flagship AI workload to justify the capacity expansion its other customers are also pulling on.
There are three things worth taking from this that the standard "circular financing" read misses entirely:
The frame most commentators are using, that AI is a financial bubble with money going in circles, gets the physics wrong. Money is not going in circles. Capacity is being reserved, years in advance, because the underlying supply curve cannot keep up with demand and everyone in the industry knows it.
Amazon paid $33 billion for something that does not exist yet. That is not a bubble signal. It is a signal about how constrained the thing it is paying for actually is, and how long it takes to build.
Watch the capacity reports. Watch the gigawatt numbers. Watch how fast hyperscalers can actually bring new data centers online. That is where the AI industry is being decided now. Not in the chip benchmarks, not in the model leaderboards, and definitely not in the quarterly earnings calls. In the ground, where the buildings are being built, and in the queue at TSMC, where everyone eventually has to wait their turn.