
LOS ANGELES, California: Anthropic won temporary relief on March 26 in its case against the U.S. government when a federal judge blocked the Pentagon's blacklisting of the AI company.
This was the latest turn in the Claude maker's high-stakes confrontation with the U.S. military over AI safety on the battlefield.
Anthropic filed a lawsuit in a California federal court, alleging that Defense Secretary Pete Hegseth exceeded his authority by labeling the company a national security supply-chain risk. This label is usually used for companies that could expose military systems to hacking or sabotage.
Anthropic said the government punished it for its views on AI safety, violating its First Amendment right to free speech. It also said it was not given a chance to challenge the decision, which violated its Fifth Amendment right to due process.
U.S. District Judge Rita Lin agreed with Anthropic in a detailed ruling but delayed the effect of her decision for seven days to give the government time to appeal.
The dispute began after Anthropic refused to let the military use its AI chatbot, Claude, for surveillance or autonomous weapons. As a result, the company was blocked from some military contracts, which it says could cost it billions of dollars and damage its reputation.
Anthropic argued that AI is not reliable enough for use in weapons and that it opposes domestic surveillance on rights grounds. The Pentagon, however, said private companies should not limit military decisions.
In her ruling, Judge Lin said the government's actions seemed more like punishment than a move to protect national security. She wrote that Anthropic appeared to be targeted for publicly criticizing the government, calling it illegal retaliation against free speech.
An Anthropic spokesperson said the company was pleased with the decision and remained committed to working constructively with the government to promote safe and reliable AI.
This is the first time a U.S. company has been publicly labeled a supply-chain risk under a little-known law meant to protect military systems from foreign threats.
Anthropic's lawsuit, filed on March 9, said the decision was unlawful, not based on facts, and went against the military's earlier praise of Claude.
The Justice Department argued that Anthropic's refusal to change its restrictions could create confusion for the Pentagon and even risk disrupting military systems during operations. The government said the decision was about contract terms, not the company's views.
Anthropic is also fighting a second case in Washington, D.C., over a similar designation that could stop it from getting civilian government contracts.