Users gained unauthorised access to Anthropic's Mythos AI in security breach; report reveals its capabilities – Moneycontrol.com
MoneyControl · 1 day ago

Access involved a Discord group and a third-party contractor.

Anthropic's Mythos AI model, described as a high-risk cybersecurity system, has reportedly been accessed by a small group of unauthorised users. The development, first reported by Bloomberg, raises concerns around access control and the risks of exposing advanced AI tools through third-party environments.

How the access occurred

According to the report, the breach involved individuals linked to a private online forum, including a person identified as a third-party contractor for Anthropic. The group is said to have used a mix of contractor-level access and commonly available internet tools to gain entry into the system.

The users are believed to be part of a Discord-based community that tracks unreleased AI models. They reportedly used knowledge of Anthropic's system formats, obtained from a previous data breach, to make an educated guess about where the Mythos model was hosted.

What Mythos is capable of

Claude Mythos Preview is a general-purpose AI model with cybersecurity capabilities. Anthropic has stated that the system can identify and exploit vulnerabilities across major operating systems and web browsers when instructed by users.

Due to the nature of these capabilities, access to Mythos is limited under the company's Project Glasswing initiative. Selected partners include Nvidia, Google, Amazon Web Services, Apple, and Microsoft, while governments are also exploring its potential use. Anthropic has not announced plans for a public release, citing the risk of misuse.

Timeline and usage

The unauthorised access reportedly began on April 7, the same day Anthropic announced limited testing of the model. The group has reportedly continued to use Mythos since gaining entry, sharing screenshots and demonstrations as evidence.

Reports indicate that the users avoided running cybersecurity-related tasks on the model in order to reduce the chances of detection. The group is also said to have accessed other unreleased Anthropic AI models.

Company response

Anthropic has confirmed that it is investigating the incident. The company stated that there is currently no evidence suggesting its core systems have been compromised or that the breach extends beyond a third-party vendor environment.

Originally published by MoneyControl
