Anthropic lets select firms test its most powerful cybersecurity model, Mythos AI

Anthropic on Tuesday unveiled a preview of its new frontier model, Mythos, but with a catch. The company is not releasing it publicly. Instead, it is placing the model in the hands of a small group of partner organisations to test its capabilities in securing software systems.

The rollout is part of a new initiative called Project Glasswing, under which a limited set of partners will use the model for defensive cybersecurity work. Anthropic described Mythos as one of its most powerful systems so far, designed with strong coding and reasoning capabilities.

Project Glasswing and partner access

As part of Project Glasswing, 12 partner organisations, including Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft and Palo Alto Networks, are working with the model to detect and fix vulnerabilities in critical software infrastructure. These partners are expected to share their findings with the broader industry.

Anthropic has also extended access to more than 40 additional organisations that maintain or build critical systems. These groups will use the model to scan both proprietary and open-source software for weaknesses.

Vulnerability detection and capabilities

According to the company, Mythos has already identified thousands of vulnerabilities, including many high-severity and zero-day flaws. Several of these issues date back one to two decades, and the oldest detected was a 27-year-old flaw in widely used systems.

Anthropic said recent advances in AI have enabled models to match or exceed highly skilled humans in identifying and exploiting software weaknesses. The company noted that such capabilities could pose risks if misused, making controlled deployment critical.

Limited release and safety concerns

The company clarified that Mythos will not be made available to the general public at this stage. Instead, the focus remains on developing safeguards that can prevent misuse while allowing organisations to deploy similar systems safely at scale in the future.

Anthropic said it has been in discussions with US government officials about the model's capabilities, even as it remains in a legal dispute after being labelled a supply-chain risk amid disagreements over AI safety practices.

Background and leak

Details of Mythos surfaced earlier through a data leak, which the company attributed to human error. The leaked material described the model as significantly more capable than its previous systems, particularly in areas such as coding, reasoning and cybersecurity.

Anthropic also reported a separate incident where internal code was unintentionally exposed, leading to disruptions on code-hosting platforms during cleanup efforts.

Originally published by storyboard18.com
