Anthropic built a model too dangerous to ship, yet

The Deep View · 25d ago

Anthropic's most advanced model is finally here, just not in the way you might expect.

On Tuesday, Anthropic unveiled Project Glasswing, an initiative to secure critical software in the age of AI as attacks become increasingly sophisticated and prevalent. Anthropic cited the launch of Claude Mythos, a model more advanced than its current top-tier model, Opus, and the far-reaching cybersecurity implications it carries, as the driving force behind forming the initiative.

"Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," said the company in the blog post.

Leaders across industries are joining Anthropic for the initiative, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.

Cross-industry collaboration is essential to achieving a comprehensive understanding of the threat landscape, a point underscored by Steve Schmidt, SVP & Chief Security Officer at Amazon.

"The software that needs to be examined is literally everything across all of the different industries, and so we have to work together, because we don't have it all," Schmidt told The Deep View.

Mythos Preview has already found thousands of "high-severity vulnerabilities" across "every major operating system and web browser." The model outperforms Claude Opus 4.6 at vulnerability reproduction, scoring 83.1% versus 66.6% on CyberGym, an advantage driven by stronger agentic coding and reasoning abilities, as reflected in benchmarks such as SWE-bench Pro and Terminal Bench 2.0.

The Claude Mythos preview will not be made generally available, though the company says it aims to one day release models of that scale to the public. Ultimately, the initiative reflects Anthropic's commitment to its core mission of responsible AI deployment, a sentiment echoed by Adam Meyers, SVP of Counter Adversary Operations at CrowdStrike.

"I think it's important to think about the fact that they [Anthropic] did this because they had a new model, and they understood there were implications to this model that they didn't fully appreciate, and they wanted to almost get a peer review to understand what are we dealing with here, and I think that that is a really responsible way to address that problem," said Meyers.

The AI race earns its name: labs are locked in fierce competition to release ever more powerful models. The risk, however, is that these advances consistently outpace the industry's ability to anticipate threats and apply appropriate guardrails. Releasing models that the world isn't yet equipped to handle carries serious consequences, particularly when bad actors enter the equation, and the potential for unprecedented harm is real. That Anthropic chose to pump the brakes despite the significant enterprise revenue at stake offers a measure of reassurance that responsibility, not greed, can still win out.

Originally published by The Deep View
