
The group is said to be part of a private Discord community that hunts for information about unreleased AI models.
Earlier this month, Anthropic released a preview of what it described as its "most powerful model yet," called Mythos, which the company said has advanced cybersecurity capabilities. Experts and even Anthropic itself have warned that the model could be extremely dangerous in the wrong hands, potentially enabling severe cyberattacks faster than companies can respond.
That concern is partly why the company opted for a limited rollout of the model to major technology and financial institutions under an initiative called Project Glasswing. Since its public reveal, the model has created a frenzy among security experts and U.S. government officials. Reports say the technology even prompted emergency discussions between officials and major Wall Street banks in recent days.
But despite the tight restrictions Anthropic placed on access to the model, a small group of outsiders reportedly gained entry anyway.
According to a report from Bloomberg, a handful of users in a private online forum managed to access Mythos. The access allegedly occurred on the same day the model was announced for limited testing, though details are only now coming to light. The information came from an individual familiar with the situation, who reportedly provided screenshots and a live demonstration of the model to verify the claim.
Unauthorized access to such a system raises concerns because of what the model is capable of doing. In Anthropic's own words, Mythos can identify and exploit vulnerabilities "in every major operating system and every major web browser when directed by a user to do so."
In simple terms, the model can scan software for security flaws. In theory, that capability could help organizations defend themselves or allow attackers to locate weaknesses in their systems. That dual-use potential is a key reason Anthropic restricted the release.
The company reportedly shared access to Mythos with a small number of organizations, including companies such as Apple, Amazon, and Cisco Systems, allowing them to test their own infrastructure for vulnerabilities before a wider rollout.
According to Bloomberg, the group responsible for the alleged unauthorized access is part of a private Discord community that searches for information about unreleased AI models. Members reportedly use bots and other tools to scan sites such as GitHub for technical clues.
One individual in the group is said to have had contractor-level access to a third-party vendor environment used by Anthropic. That access reportedly helped the group get closer to the Mythos system.
The method used to locate the model appears to have been surprisingly simple. The group allegedly made "an educated guess" about the model's online location based on knowledge of the naming patterns Anthropic uses for its systems. Some of those technical details were reportedly exposed in a recent data breach involving Mercor, a company that works with several AI developers.
Responding to the report, Anthropic said it is investigating the situation.
"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement. Anthropic added that it currently has no evidence the reported access went beyond the vendor environment or affected its internal systems.
According to Bloomberg's source, the group did not use the model to attempt cyberattacks. Instead, they reportedly ran simple tests, such as asking the model to build basic websites.