
OpenAI told investors this week that its early push to dramatically increase computing resources gives it a key advantage over Anthropic PBC at a moment when its longtime rival is gaining ground and mulling a potential public offering.
The ChatGPT maker said it has outpaced Anthropic by "rapidly and consistently" adding computing capacity to support wider adoption of its software, according to a note the company sent to some of its investors after Anthropic announced a more powerful AI model called Mythos. The ambitious infrastructure build-out, criticized by some as too costly, has enabled OpenAI to better keep pace with rising demand for AI products, the memo states.
"That gap matters because compute is now a product constraint," OpenAI said in the note, which was viewed by Bloomberg News. OpenAI declined to comment.
Anthropic, founded by former OpenAI employees, has struggled at times in recent months to keep its services online amid a surge in demand fueled by new product offerings and public support over its standoff with the Pentagon. In its note to investors, OpenAI cites a report from analyst Ben Thompson, who authors the Stratechery blog, suggesting computing constraints may have impacted Anthropic's decision to limit the release of Mythos to select partners.
OpenAI said it had 1.9 gigawatts of computing capacity available in 2025, triple the amount it had the year prior. (A gigawatt is enough to power roughly 750,000 US homes.) The company expects that amount to grow to the "low-double-digit range" next year and reach roughly 30 gigawatts by 2030. By comparison, OpenAI estimates Anthropic ended 2025 with 1.4 gigawatts and will have between 7 and 8 gigawatts of capacity next year.
"Even at the high end of that range, our ramp is materially ahead and widening," OpenAI said in the memo.
In recent months, Anthropic has increased investments in physical infrastructure for its AI services, including committing to spend $50 billion to build data centers in the US. More recently, Anthropic expanded a strategic collaboration with Broadcom Inc. and Alphabet Inc.'s Google to access about 3.5 gigawatts of computing power beginning in 2027. And Anthropic has a diversified mix of suppliers, working with the three major cloud providers: Google, Microsoft Corp. and Amazon.com Inc.
In response to a request for comment, Anthropic directed Bloomberg News to an earlier statement from Chief Financial Officer Krishna Rao on the deal with Broadcom and Google.
"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure," Rao said. "We are making our most significant compute commitment to date to keep pace with this unprecedented growth."
In the past, Anthropic has generally taken a more conservative approach to spending than OpenAI, which plans to spend about $600 billion on data centers and chips by 2030. OpenAI recently raised $122 billion in funding to help pay for those commitments. Still, the company has faced skepticism over the pace of its outlays, given it does not expect to be profitable for years. On Thursday, OpenAI said it would pause a planned infrastructure project in the UK, citing energy costs.
"We as a company try to manage as responsibly as we can," Anthropic Chief Executive Officer Dario Amodei said in an interview late last year about his firm's spending plans. "And then I think there are some players who are YOLO-ing," he added, using an abbreviation that means "you only live once."
In the investor memo, OpenAI characterized Amodei as miscalculating the market's appetite for more AI products. "In hindsight," the company said, "that caution looks less like discipline and more like underestimating how fast demand would arrive."