
Anthropic released its new AI cybersecurity model to a select group of organisations.
AI firm Anthropic is reportedly investigating claims of unauthorised access to its Claude Mythos model, which has yet to be released to the public.
Anthropic said: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments."
Bloomberg previously reported that members of a private forum had accessed the model, which is currently released only to a small number of AI firms.
Despite this report, Anthropic said there is no evidence that its systems have been compromised, nor any evidence that those with access to the model are malicious.
However, the report does raise concerns about how well AI firms can keep their models out of the wrong hands, and about the potential fallout if they fail.
Anthropic released the Mythos preview to a select group of AI and financial firms so they could test their defence systems against the model's superior ability to find and exploit vulnerabilities.
Keeping the model secure and away from the public or threat actors, however, depends on those organisations controlling access on their end as well.
According to Bloomberg's report, one of the users claiming unauthorised access to the model already had permission to view it through work they did for a third-party vendor.
The group had been using the model, though not for hacking, as its members wanted to avoid detection, according to Bloomberg.
Anthropic's Mythos sent shockwaves through the cybersecurity and AI spheres when Anthropic revealed its ability to find and exploit security vulnerabilities in seemingly secure code.
Its staggered release was meant to allow financial firms and other AI companies to test their code and systems for vulnerabilities, as Anthropic has warned the model may have too much potential for malicious use if made publicly accessible.
The model's release also sparked interest from world leaders: the UK government sought to work with Anthropic, and even the US reversed its stance on the AI firm, looking to collaborate because of the new model.