
Anthropic, the San Francisco AI company behind the Claude chatbot, is quietly investigating whether it should design its own artificial intelligence chips.
Key Takeaways:
- Anthropic is in early-stage discussions about designing proprietary AI chips, though it has not committed to any architecture or formed a dedicated team.
- The company's annualized revenue has more than tripled, from roughly $9 billion in late 2025 to over $30 billion in 2026, fueling demand for more compute.
- Designing a cutting-edge AI chip costs approximately $500 million, covering specialized engineering talent and defect-free manufacturing.
Three people with knowledge of the discussions confirmed the exploration, which places Anthropic alongside Meta and OpenAI in a widening push among AI companies to control their own hardware destiny. The effort remains early-stage, with no dedicated team assembled and no chip architecture locked in. Anthropic may ultimately choose to keep purchasing chips from existing suppliers.
Right now, Anthropic relies on a mix of processors sourced from major partners. Google's tensor processing units (TPUs) and Amazon's custom chips handle the heavy lifting -- training and running the AI models that power Claude. Earlier this week, Anthropic deepened that hardware relationship by signing a long-term agreement with Google and chip design firm Broadcom, a deal tied to Anthropic's broader pledge to pour $50 billion into U.S. computing infrastructure.
Still, buying someone else's chips and building your own are not mutually exclusive strategies. The company's explosive growth makes the economics of custom silicon increasingly attractive. Claude usage has surged throughout 2026, and Anthropic disclosed this week that its run-rate revenue now exceeds $30 billion -- more than triple the approximately $9 billion figure from late 2025.
That kind of demand devours compute at a staggering pace. Every query, every API call, every enterprise deployment runs through chips that Anthropic currently rents or buys from partners who are also competitors in the AI race. Designing hardware in-house would give the company tighter control over performance, cost, and supply.
The price tag for entering the custom chip business is steep. Industry estimates put the cost of designing a single advanced AI chip at around $500 million. That figure covers hiring highly specialized semiconductor engineers and funding the rigorous testing required to ensure a defect-free manufacturing pipeline. It's a serious capital commitment, even for a company posting $30 billion in annualized revenue.
Anthropic is hardly alone in eyeing this path. Meta has been developing custom AI accelerators for its own data centers, and OpenAI has explored chip design partnerships. The logic is the same across all of them: when your core product depends on a finite supply of specialized hardware, owning the design gives you leverage that no purchase order can match.
Whether Anthropic presses ahead or shelves the idea remains uncertain. Two people familiar with the plans emphasized that the company has made no firm decision. But the mere fact that the conversation is happening reveals how seriously Anthropic takes the hardware bottleneck -- and how much the economics of AI have changed in just twelve months.