Opinion | Anthropic and Hegseth Need a Truce

The Wall Street Journal

Memo to Defense Secretary Pete Hegseth: Iran is the enemy, not Anthropic, which is helping you defeat Iran.

Memo to Anthropic CEO Dario Amodei: Iran is the enemy, not the U.S. government.

Now that Anthropic has won a preliminary injunction blocking the Pentagon's claim that it is a "supply-chain risk," maybe the two sides can set aside their enmity and call a truce for the good of the country.

President Trump posted on social media late last month that he wanted Anthropic exiled from government contracts because it had gone "woke." The Pentagon followed by declaring the company a "supply-chain risk," as if Anthropic's Claude AI model is a sleeper agent like a Chinese telecom firm.

Anthropic fought back in court, and federal Judge Rita Lin on Thursday sided with the company and issued the injunction. The Pentagon has vowed to appeal, but it would be better if it worked things out with the company.

Judge Lin ruled that the Pentagon's actions "do not appear to be directed at the government's stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude," Anthropic's large language model. "Instead, these measures appear designed to punish Anthropic," she wrote. The Defense Department's memo came down a week after Mr. Trump ordered the ban, which suggests a pretextual decision.

The Judge also said the ban appears to be retaliation in violation of the First Amendment for the company's political views. Mr. Amodei supported Kamala Harris in 2024. Giving support to the company's viewpoint claim, White House AI czar David Sacks wrote last year that the "real issue" around Anthropic is its "agenda to backdoor Woke AI and other AI regulations through Blue states like California."

This brawl is unfortunate all around -- not least for U.S. security. The "supply-chain risk" label has rippling consequences. A "federal contractor with whom Anthropic has built custom applications has indicated that it may suspend that work or even remove Claude from existing deployments," the company says in its lawsuit.

Mr. Amodei has said Anthropic doesn't want its models deployed for so-called domestic "mass surveillance" or in fully autonomous weapons systems. But neither is happening now, and neither is likely any time soon. Mr. Amodei protests too much.

Yet there was no reason for the Administration to go to DEFCON 1. Members of Congress were moving to find a legislative solution to Anthropic's desire not to have Claude contribute to "mass surveillance" of Americans, which isn't happening in any case.

A bipartisan letter from Sens. Mitch McConnell, Roger Wicker, Chris Coons and Jack Reed, the leads of the Senate's defense committees, warned against a hasty Pentagon decision and said that the U.S. "cannot afford to take on any preventable risk that would give our adversaries, particularly China, an edge" in tech.

One of AI's most promising features is helping American forces fuse a vast array of information and select targets. That is crucial to the campaign in Iran, as the military must eliminate Iranian missiles and launchers before Iran can return fire. The Iran conflict is modest compared to a war over the Taiwan Strait. The U.S. military would be dealing with an order of magnitude more targets -- and a People's Liberation Army exploiting its own AI.

After Anthropic's purge, OpenAI moved in and allayed its surveillance concerns to work with the Pentagon. But by most accounts Claude is more advanced than what either OpenAI's models or Elon Musk's Grok can deliver to the Pentagon. Anthropic's legal victory is an opportunity for both Mr. Amodei and the Pentagon to work out a compromise.
