Anthropic Mythos AI shock: disturbing leak claim sparks urgent probe

punemirror.com · 14h ago

Anthropic Mythos AI is at the centre of a growing security storm after reports that a small group of unauthorised users quietly gained access to the powerful cybersecurity model via a third-party vendor.

Anthropic recently unveiled Mythos as a specialised Claude-based AI model designed to help major organisations detect software vulnerabilities and respond to cyber threats more quickly than human teams.

The system is being trialled under Project Glasswing, an initiative that gives select partners, including Apple and other large technology and financial firms, access to the Anthropic Mythos AI preview for defensive security work.

Anthropic has said Mythos can find thousands of high‑severity bugs across major operating systems and web browsers, underscoring why tight control over Anthropic Mythos AI access is seen as critical.

According to Bloomberg and other outlets, a small private group on the Discord platform began using Anthropic Mythos AI on the very day it was publicly announced.

Members reportedly combined credentials linked to a contractor working with Anthropic with open internet sleuthing tools to locate and access the Claude Mythos Preview environment.

Bloomberg's reporting suggests the group shared screenshots and even a live demonstration of Anthropic Mythos AI to support their claims, while avoiding overtly cybersecurity‑related prompts in an apparent effort not to trigger alarms.

Anthropic has confirmed it is investigating the incident, telling TechCrunch: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third‑party vendor environments," and adding that there is currently no evidence its own systems have been compromised.

The company also says there is no sign that the activity went beyond the affected vendor, but the Anthropic Mythos AI scare is likely to intensify scrutiny of how unreleased, high‑risk AI models are tested and secured before wider deployment.

For now, Anthropic Mythos AI remains in restricted preview, yet the alleged leak shows that even tightly controlled frontier systems can be probed and exposed at the edges, a warning that defensive AI may only be as strong as the weakest partner handling it.

Originally published by punemirror.com
