Anthropic: US statecraft battles go domestic
fdiintelligence.com, April 1 2026

The US government's showdown with Anthropic has become a flashpoint in the Trump administration's use of national security justifications to push the limits of executive power, after the AI start-up was blacklisted following a dispute over its safety guardrails.

The Pentagon designated Anthropic and its Claude chatbot a supply-chain risk after a dispute over the technology's use in mass domestic surveillance and autonomous weapons, intensifying scrutiny of the government's use of national security justifications to pursue unrelated objectives.

Anthropic won a first-round court battle on March 26 when a federal judge temporarily blocked the designation, describing it as an apparent attempt to "punish" an AI lab that doesn't align with the administration's "stated national security interests".

It comes just one month after the US Supreme Court struck down the president's use of national security powers to impose the so-called "liberation day" tariffs.

Anthropic is the latest example of the Trump administration's willingness to intervene in the private sector, the best-known instance being its new shareholdings in companies in strategic sectors. Washington insiders say the AI dispute has pushed such fears to the top of the agenda at other firms considering doing business with the federal government.

"Lots of companies are worried that if they're going to engage [with] the Trump administration and seek a particular policy . . . the price may be an equity stake or some other concession they have to give to the executive branch," says Jim Secreto, a former treasury and commerce department official.

Some of Silicon Valley's biggest tech firms have rallied behind Anthropic, in contrast to their reluctance to contest unorthodox policies during the first year of Trump 2.0.

At the centre of the dispute is Anthropic's unwavering position that its frontier AI model cannot be used to power fully autonomous weapons or be used in mass surveillance within the US.

The Pentagon initially agreed to these terms in a contract struck last year, which made the $380bn start-up the first AI lab to deploy its models in the US military's most sensitive systems. But in the following months it sought to renegotiate the safeguards, insisting the government could use the tools in its contract however it saw fit.

Anthropic refused to back down, prompting the government to follow through on its public threats, which made Anthropic the first US company categorised as a supply-chain risk. The firm's CEO Dario Amodei labelled the move "retaliatory and punitive" given the Pentagon could have simply walked away from the contract.

This use of a tool intended for companies from countries of concern -- most famously Huawei in 2019 -- to curb systemic security risks in government supply chains has shocked the country's national security community.

"These authorities were created with . . . malevolent foreign activity in mind," says Kat Duffy, senior fellow at the Council on Foreign Relations. "The idea that they would be deployed against a leading US AI company is frankly astonishing."

The lower court's injunction won't take effect until April 2. Irrespective of how the dispute evolves, it is being seen as reaching a new level of coercion by the Trump 2.0 administration. The president and his inner circle have used threats and funding cuts to influence industries from higher education to law firms. Chipmakers Nvidia and AMD have been granted export licences in exchange for the government taking a cut of the revenues.

But Anthropic is the first notable use of domestic coercion via national security tools. "Turning these really powerful weapons against a US company is remarkable," says Abraham Newman, a political scientist and professor at Georgetown University. It brings into the domestic domain strategies usually reserved for international statecraft, most notably tariff threats against Europe over digital service taxes and Greenland's sovereignty.

Experts warn that transactional use of national security policies weakens their credibility and effectiveness. "[When] there's a real concern that some of the government's actions . . . are not legitimate or if the government is willing to negotiate, such as with TikTok or Nippon Steel, you're going to get more of them willing to push the envelope," says Sarah Bauerle Danzman, a professor of international studies at Indiana University.

The consequences of the Anthropic dispute are already reverberating through the private sector. The supply-chain risk designation bars all companies from using Anthropic's tools in any work for the defence department, a challenge given the technology's wide adoption by legacy and next-generation defence players like Palantir.

If the government ultimately prevails, Emily Benson, head of strategy at intelligence firm Minerva, believes the "biggest problem . . . is the pervasiveness of Anthropic being used by other companies in and around the defense industrial base and advanced technologies".

Defence secretary Pete Hegseth is using social media to push companies with Pentagon contracts to stop using Anthropic in all their activities, while other federal agencies have indicated they are following Trump's directive to phase out Anthropic's technology from internal operations.

Anthropic estimates the blacklisting cost it billions of dollars in revenues this year alone. Duffy describes the move as a "power play aimed at destroying the company", and is among those expecting other firms to think "very carefully" about becoming government suppliers as "there are unique political risks now that did not exist before".

The Anthropic episode marks a new level of pushback from the private sector against the second Trump administration.

The day before the district court ruling, spokespeople for Microsoft (which filed an amicus brief in support of Anthropic's lawsuit), AWS and Google told fDi they continue to work with Anthropic and offer Claude via their platforms on non-defence related projects. OpenAI, Anthropic's primary competitor, which quickly struck a preliminary agreement with the Pentagon to replace Amodei's firm, has been less vocal but issued a statement disagreeing with the supply-chain risk designation.

"Given how reluctant so many private sector leaders have been to criticise this administration on anything notable, it's a sign that they are quite nervous about how far [it] might go in punishing private sector companies for political acts," says Geoffrey Gertz, senior fellow at the Center for a New American Security.

What distinguishes the government's actions against Anthropic from those against companies in other sectors is, of course, the scale of novel risks presented by frontier AI. Dangers linked to its use in mass surveillance and autonomous weapons challenge tolerance thresholds in ways unrivalled by government equity stakes or funding.

For a country intent on winning the AI race, the administration's actions could also present ramifications for its geoeconomic goals. "There are huge risks to see the government . . . limiting the market for AI in dangerous ways by showing it will go against [US] AI companies over national security," says Gertz. "That is going to cause real questions in the market in terms of where, ultimately, the US is going on AI."

Originally published by fdiintelligence.com
