A group of unauthorised users reportedly gained access to Anthropic's new product, which the artificial intelligence company says is too powerful to release to the public as it "poses unprecedented cybersecurity risks".
Anthropic's new AI technology, Mythos, is designed for enterprise security and is being tested by a few technology and cybersecurity firms.
Members of a "private online forum" managed to gain access to Mythos through a third-party vendor, according to Bloomberg.
"We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told Euronews Next.
There is currently no evidence that Anthropic's systems are impacted, nor that the reported activity extended beyond the third-party vendor environment, the company added.
Members of the unauthorised group are part of a Discord channel that seeks out information about unreleased AI models, Bloomberg reported.
Citing a person employed by a third-party contractor that works for Anthropic, Bloomberg added that the group tried several strategies to gain access to the model.
The outlet also reported that the unauthorised group had been regularly using Mythos once it gained access.
Anthropic said it would limit the release of its new AI model to a few tech and cybersecurity firms as part of its so-called Project Glasswing. The list includes Amazon, Apple and JP Morgan Chase.
Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are reportedly testing the Anthropic model too, according to reports.
Treasury Secretary Scott Bessent convened a meeting of senior American bankers in Washington in April to discuss the Mythos model. At the meeting, the banking executives were encouraged to use Anthropic's Mythos model to detect vulnerabilities, according to Bloomberg.