
NEW YORK, April 8 (Reuters) - A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon's national security blacklisting of Anthropic for now, a win for the Trump administration that comes after another appeals court came to the opposite conclusion in a separate legal challenge by Anthropic.
Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk, a label that blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting.
Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational harm.
A panel of judges of the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic's bid to pause the designation while the case plays out. The decision is not a final ruling.
The lawsuit is one of two Anthropic filed over Hegseth's unprecedented move, which came after Anthropic refused to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons due to safety and ethics concerns.
Hegseth issued orders designating Anthropic under two different laws, and Anthropic is challenging each of them separately.
A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety.
Anthropic is the first U.S. company to be publicly designated a supply-chain risk under the obscure government-procurement statutes, which are aimed at protecting military systems from enemy sabotage or infiltration.
In its lawsuits, Anthropic says the government violated its First Amendment right to free speech by retaliating against it for its views on AI safety. The company also said it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process.
The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military's past praise of Claude.
The Justice Department said in a court filing that Anthropic's refusal to lift the restrictions could create uncertainty within the Pentagon over how it may use Claude and risk disabling military systems during operations.
The government said its decision stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety.
The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an interagency review process.
The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems.
(This story has been refiled to correct the dateline to April 8)
Reporting by Jack Queen in New York; Editing by Noeleen Walder and Cynthia Osterman