With Mythos, Anthropic Deserves Support, Not a Blacklisting

Artificial intelligence firm Anthropic PBC did something few companies would ever consider: It spent billions to develop a product significantly better than any rival in the world's hottest market ... and refused to release it. The new model, called Mythos, has unprecedented cybersecurity capabilities. It found a 27-year-old bug in OpenBSD, one of the most security-hardened operating systems on earth. It uncovered a 16-year-old flaw in the video encoder FFmpeg that had survived five million automated security tests without detection.

But instead of selling access, Anthropic launched Project Glasswing, a defensive consortium with Amazon.com Inc., Apple Inc., Microsoft Corp., CrowdStrike Holdings Inc. and others, committed $100 million in credits, and briefed federal agencies on the risks. Mythos is too dangerous to deploy publicly and too valuable to lock away.

Mythos is new, but we've seen this problem before. In 2009, I published a paper with Kenneth Oye and Scott Mohr on the security implications of synthetic biology, which has the potential to revolutionize both bioweapons and biodefenses. The core challenge was identical: a powerful dual-use capability diffusing through private actors, where advancing technology was initially empowering attackers more than defenders. A biologist named Roger Brent dubbed the problem the Valley of the Shadow of Death.

Brent's insight was simple and uncomfortable. The advance of biotechnology was helping attackers by making it easier to create, modify, and weaponize pathogens. But eventually our mastery of biotechnology will progress to the point where we will be able to make defenses so powerful that any attack is futile. The problem is the gap between those two moments, and the only solution is to get through the valley as fast as possible.

Cybersecurity has entered the valley. Mythos created working exploits on the first attempt 83% of the time. It solved a corporate network attack simulation that would have taken a human expert more than 10 hours. In one evaluation it escaped its own secured sandbox and sent an email to the researcher running the test. Mythos isn't even a cybersecurity specialist. These capabilities just emerged as a byproduct of improvements in reasoning and coding.

Anthropic's own assessment is that competitors' models with similar power are six to 18 months away. The number of people and states with sophisticated cyberattack capabilities is about to get much larger. That's exactly what made synthetic biology risky: Capabilities once reserved for elite practitioners were about to spread.

The usual comparison for this kind of risk is nuclear weapons. But they have a choke point: fissile material. Restrict access to it, and the problem is largely solved. Biological threats, like cyber ones, spring from commercially available tools. You need to be a country to build a nuclear weapon. The pool of people who can manipulate biological systems or exploit software vulnerabilities grows every year. This requires an entirely different strategy.

Our biosecurity research pointed to four approaches: community norms among practitioners; regulation at the firms through which dangerous capabilities pass; accelerated defensive research; and designing safety into foundational technologies. Of these, community norms are both the most important and the most fragile. Regulation can be evaded. Safety features can be reverse-engineered. But a professional culture in which practitioners internalize responsibility for the consequences of their work is the deepest barrier to misuse. That kind of culture depends on a simple bargain: Those who hold the line must believe that holding the line is valued, or at least not punished.

Project Glasswing is Anthropic's attempt to sprint through the valley. It withheld Mythos from general release. It briefed CISA and other agencies before launch. It committed $100 million and recruited the companies that maintain the world's most critical software to find and fix vulnerabilities before attackers exploit them. And it is developing safeguards on less powerful models first, refining controls before scaling Mythos-class capabilities. Anthropic's own red team report describes the logic: Once the security landscape reaches a new equilibrium, defenders will benefit more than attackers. The biosecurity community has spent decades begging biotech companies to behave like this. It's not a solution, but it's a good beginning.

And Defense Secretary Pete Hegseth wants it to stop. Anthropic refused to allow the Pentagon to use its AI systems for mass domestic surveillance or for fully autonomous weapons systems before the technology was ready. In response, Hegseth blacklisted Anthropic as a supply chain risk, a designation normally reserved for companies owned by the Chinese government. Anthropic is the only American company ever labeled this way, and the designation would block any defense contractor from using Anthropic's models in any work it does with the Pentagon.

A federal judge found the action likely violated the law and issued a nationwide stay, although a three-judge appeals panel refused to overturn Hegseth's decision. Hegseth's approach to the AI-centric world is to forbid the government from working with the people who understand it best. I'm not sure going blindfolded into battle is the best expression of the warrior ethos. But the real damage is not to one company. It is to the norm itself.

Anthropic's refusal to grant the Pentagon unrestricted use of its models was not a quirk of corporate policy. It was exactly the kind of community norm that biosecurity researchers identified as the most important barrier to misuse of dual-use technologies. The company drew red lines against mass surveillance and fully autonomous weapons not because any law required it but because its leaders believed those uses were wrong. That is what a community norm looks like when it's working.

Anthropic's business is thriving. Its annualized revenue run rate more than doubled between February and April, from $14 billion to $30 billion. The company's chief commercial officer has said that customers respect that it "demonstrates its principles." But more than 100 enterprise customers have said they may no longer be able to work with Anthropic because of the legal threats against it, and the company has said those sanctions may cost it billions in revenue.

The message to every other AI lab is clear: Put responsibility at your core and you might end up at war with the Trump administration. But if you're comfortable with ethical compromises, the government will embrace you. OpenAI and xAI agreed to let the Pentagon use their models for any lawful purpose, with OpenAI claiming it has added safeguards. The next company to build a Mythos-class model will think very hard about being so careful with it. And if Mythos really has the capabilities Anthropic claims, it's simply too dangerous to be in unregulated private hands. The problem is that this White House seems intent on making the valley broader and deeper.


There's no way to bypass the valley. The only way out is through, and that means creating better defenses, faster. We need to support responsible companies, not blacklist them. We also need to build the regulatory infrastructure to manage dual-use AI the way the best biosecurity policy manages dual-use biology: through choke-point oversight, professional norms, defensive investment, and safety by design. Every month spent punishing the defenders instead of empowering them is another month trapped in the valley.

More From Bloomberg Opinion:

  • US Cybersecurity Cutbacks Come at Exactly Wrong Time: Dave Lee

  • Anthropic, OpenAI Talk Safety. Headcounts Don't: Parmy Olson

  • The Pentagon Is Thwarting American Genius: Gautam Mukunda


Originally published by Bloomberg Business
