
Anthropic's new cybersecurity AI model, Mythos, which was supposed to be available only to a small group of partner companies, has reportedly been accessed by unauthorized users. According to Bloomberg, the group managed to gain early access to the model through a third-party vendor.
Unauthorized users reportedly accessed Mythos early
Per the report, a private online forum is behind the unauthorized access. Speaking with TechCrunch, a spokesperson for Anthropic confirmed that the company is investigating the claim, saying the issue is related to its Claude Mythos Preview.
"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."
As of now, the company says there is no evidence that its internal systems were directly affected.
However, it's the unauthorized access itself that stands out here. The group is said to have used connections linked to a contractor working with Anthropic, trying multiple methods before successfully getting into the system.
Discord group used model after launch
Members of the group are reportedly part of a Discord community focused on tracking unreleased AI tools. They allegedly gained access on the same day Mythos was announced and have been using it since. According to the report, the group figured out where the model might be hosted based on patterns from earlier Anthropic deployments. They then tested that assumption until it worked.
Interestingly, the intent does not appear malicious, at least for now. The group is said to be more interested in exploring new models rather than causing damage.
At the time Mythos was announced, Anthropic said it was being shared with a small group of partners, including companies like Apple, under Project Glasswing. The company wanted to keep access limited, warning that the model could be misused if it fell into the wrong hands.
Well, that plan now looks a bit shaky. Even if Anthropic's core systems remain untouched, third-party access points are tough to lock down.