
Cryptocurrency companies are circling Anthropic, the maker of the Claude family of artificial intelligence models, in what amounts to a slow-motion campaign to gain influence over one of the most consequential AI firms in the world. The effort, which involves attempts to acquire shares on secondary markets, invest in new funding rounds, and build commercial partnerships, is raising alarms inside Anthropic and among its allies in Washington. The stakes are enormous -- not just for the future of AI safety research, but for the geopolitical competition between the United States and its adversaries over who controls the next generation of foundational technology.
The push was first detailed by The Information, which reported that multiple crypto-linked entities have sought access to Anthropic's cap table or attempted to forge deals that would give them a seat at the table. Some of these overtures have come through intermediaries; others have been more direct. Anthropic has rebuffed a number of these approaches, but the persistence of the interest -- and the scale of capital behind it -- has forced the company to shore up its defenses in ways that go well beyond typical Silicon Valley corporate governance.
Why crypto? And why now?
The answer lies at the intersection of two industries that have spent the last several years accumulating vast pools of capital while facing intensifying regulatory scrutiny. Crypto firms, many of which are sitting on billions of dollars in liquid assets after the post-2022 recovery in digital asset prices, see AI as the next frontier. They want exposure to the most promising private AI companies, and Anthropic -- valued at roughly $60 billion after its most recent funding round -- sits near the top of every list. The company's Claude models have become genuine competitors to OpenAI's GPT series, and its research organization, built by co-founders including CEO Dario Amodei, a former VP of Research at OpenAI, is widely regarded as among the most talented in the field.
But Anthropic isn't just any AI company. It was founded explicitly around the idea that AI development needs to be conducted with extraordinary care. Its corporate structure reflects that ethos. Anthropic is organized as a public benefit corporation, a legal form that allows its board to weigh societal impact alongside shareholder returns. The company has also established a Long-Term Benefit Trust, a governance mechanism designed to ensure that safety considerations can't be easily overridden by investors seeking maximum financial returns. These structures are unusual, and they're the very features that make crypto interest so fraught.
The concern, as multiple people familiar with Anthropic's thinking have described it, is that crypto-linked investors could introduce governance pressures that conflict with the company's safety mission. Crypto culture, broadly speaking, prizes decentralization, speed, and minimal regulation -- values that sit in tension with Anthropic's deliberate, cautious approach to deploying powerful AI systems. There's also the matter of reputation. Anthropic has worked hard to position itself as the responsible actor in a field that Washington is watching closely. An influx of crypto money could complicate that narrative at a moment when AI regulation is being actively debated in Congress and at federal agencies.
This isn't hypothetical. The crypto industry's track record with governance is, to put it charitably, mixed. The collapse of FTX in late 2022 -- and the subsequent criminal conviction of its founder, Sam Bankman-Fried -- remains fresh in the minds of regulators, lawmakers, and the broader tech industry. Bankman-Fried's venture arm, FTX Ventures, had actually invested in Anthropic before FTX imploded. That investment ended up in bankruptcy proceedings and was eventually sold, but the episode underscored how crypto capital can bring unwanted complications.
And the current moment is different from 2022 in important ways. The crypto industry has regrouped. Bitcoin has surged past $100,000. Major players like Coinbase, a16z crypto, and a constellation of crypto-native funds are flush with capital and looking for the next big deployment. AI, with its insatiable appetite for compute and capital, is a natural target. Several crypto firms have already made investments in AI startups, and the boundary between the two industries is blurring fast -- particularly as concepts like decentralized AI training and on-chain inference gain traction in certain corners of the tech world.
Anthropic's response has been to tighten its shareholder agreements and exercise greater control over secondary sales of its stock. The company has reportedly added new restrictions that give it the right to block transfers of shares to buyers it deems incompatible with its mission. This is a significant move. Secondary markets for shares in hot private companies are a major source of liquidity for early employees and investors, and restricting those transactions can create friction. But Anthropic appears willing to accept that tradeoff.
The company's posture here mirrors a broader trend among elite AI labs. OpenAI, which recently completed a complex corporate restructuring, has also grappled with questions about who should be allowed to own its equity. Google DeepMind, by virtue of being wholly owned by Alphabet, doesn't face the same cap table pressures, but it has its own governance complexities. The point is that the question of who funds AI -- and what influence that funding confers -- has become one of the central strategic questions in the industry.
There's a geopolitical dimension too. The U.S. government has grown increasingly focused on ensuring that frontier AI capabilities don't end up in the hands of adversarial nations or entities that might facilitate such transfer. The Commerce Department's Bureau of Industry and Security has tightened export controls on advanced chips. The Treasury Department's Committee on Foreign Investment in the United States, known as CFIUS, has expanded its scrutiny of investments in AI companies. And bipartisan legislation has been introduced that would give the federal government greater authority to review and block foreign investments in AI firms.
Crypto complicates this picture. The industry is global by design. Many crypto firms operate across multiple jurisdictions, with complex corporate structures that can obscure ultimate beneficial ownership. Some of the entities that have expressed interest in Anthropic shares have ties to investors or principals based outside the United States, according to people briefed on the matter. That doesn't make them hostile actors -- but it does make due diligence harder, and it raises the kind of questions that CFIUS was created to answer.
Anthropic CEO Dario Amodei has been vocal about the national security implications of AI development. In a lengthy essay published last year, he argued that the United States and its democratic allies need to maintain a lead in AI capabilities, and that this lead could be decisive in shaping the trajectory of the 21st century. He's also been a frequent visitor to Washington, meeting with lawmakers and administration officials to discuss AI safety and governance. Allowing crypto firms with opaque ownership structures onto his company's cap table would undercut that message.
So Anthropic is playing defense. But it's also playing offense. The company recently closed a massive funding round that valued it at approximately $60 billion, with participation from investors including Google, Spark Capital, and Salesforce Ventures. That round gave Anthropic a substantial war chest -- enough to fund the enormous compute costs associated with training next-generation models without needing to accept capital from sources it considers problematic. The size of the round was itself a strategic statement: Anthropic has enough demand from blue-chip investors that it doesn't need to open the door to anyone.
Still, the secondary market remains a vulnerability. Early employees and seed-stage investors who hold Anthropic shares may be eager to cash out, and crypto buyers are often willing to pay premium prices. The economics are straightforward: a crypto fund sitting on billions in appreciated digital assets can afford to be aggressive in bidding for shares in what it perceives as the next transformative technology company. For a mid-level Anthropic engineer holding illiquid equity, a buyer offering a 20% premium over the last funding round's price is hard to turn down.
This dynamic is playing out across the private AI market, not just at Anthropic. Secondary market platforms like Forge Global and EquityZen have reported surging interest in AI company shares, with crypto-linked buyers representing a growing share of demand. The trend has prompted several AI companies to update their transfer restriction policies, though few have been as aggressive as Anthropic in doing so.
The tension between crypto and AI safety isn't purely philosophical. It has practical implications for how AI models are deployed. Anthropic has been notably cautious about releasing its most capable models, often imposing usage restrictions and safety filters that some commercial customers find frustrating. A crypto-influenced board or investor base might push for faster, less restricted releases -- particularly if those models could be integrated into decentralized applications or used to power autonomous agents in DeFi protocols. The commercial opportunity is real, but so are the risks.
And the risks aren't just theoretical. Recent months have seen a proliferation of AI-powered scams and fraud schemes in the crypto space, from deepfake impersonations of exchange executives to AI-generated phishing campaigns targeting wallet holders. The intersection of powerful generative AI and pseudonymous financial systems creates a surface area for misuse that neither industry has fully reckoned with. Anthropic, which has invested heavily in constitutional AI and other alignment techniques, is acutely aware of these dangers.
There's an irony here. Crypto evangelists often talk about building trustless systems -- infrastructure that works without requiring anyone to trust anyone else. Anthropic's entire thesis is that trust matters enormously, that the people and institutions building AI need to be trustworthy, and that governance structures must be designed to preserve that trust even under enormous financial pressure. These are fundamentally different worldviews, and they don't reconcile easily.
None of this means that every crypto firm interested in Anthropic is a bad actor. Many are legitimate, well-governed enterprises run by sophisticated investors who understand the importance of AI safety. Andreessen Horowitz's crypto fund, for instance, is backed by one of Silicon Valley's most established venture firms. Coinbase is a publicly traded company subject to SEC oversight. The issue isn't that crypto money is inherently tainted -- it's that the sector's structural characteristics make it harder for Anthropic to conduct the kind of thorough vetting its mission demands.
The situation also highlights a broader challenge facing private AI companies: the mismatch between the capital they need and the capital they want. Training a frontier AI model now costs hundreds of millions of dollars, and the next generation of models may cost billions. That kind of money doesn't grow on trees. It comes from sovereign wealth funds, big tech companies, and -- increasingly -- from industries like crypto that have accumulated massive pools of investable capital. Saying no to all of it requires either finding enough capital elsewhere or accepting slower growth. Anthropic, for now, has chosen the former.
How long that remains viable depends on the competitive dynamics of the AI industry. If the cost of staying at the frontier continues to escalate -- and every indication suggests it will -- even well-funded companies may face pressure to broaden their investor base. The question then becomes whether Anthropic's governance structures are strong enough to absorb new types of investors without compromising its safety mission. The Long-Term Benefit Trust is designed to provide exactly that kind of resilience, but it hasn't been tested under real adversarial pressure.
For the crypto industry, the interest in Anthropic is part of a larger strategic bet. The firms pushing hardest for access believe that AI and blockchain will converge in ways that create entirely new categories of applications -- autonomous economic agents, decentralized model marketplaces, on-chain verification of AI outputs. Getting exposure to a company like Anthropic isn't just a financial trade; it's a way to position for what these firms see as an inevitable fusion of the two technologies.
Whether that fusion happens -- and whether it's desirable -- is another question entirely. But the battle over Anthropic's cap table is a proxy for something larger: a contest over who gets to shape the development of artificial intelligence at its most critical juncture. The money flowing toward AI isn't just capital. It's influence. And influence, once granted, is very hard to take back.
Anthropic knows this. That's why it's building walls.