
Anthropic is in active talks with the UK government over rolling out its powerful Claude Mythos model to British businesses eager for access to technology that has raised alarm for its ability to expose cyber security vulnerabilities.
Officials are discussing an expanded rollout of the powerful AI model by the San Francisco-based company to key UK-based organisations, according to people familiar with the matter.
Banks and financial institutions are among those seeking to get expedited access to the technology to help strengthen their cyber security, but Anthropic told the FT it would not commit to a timeline on this.
In light of the risks posed by the model, Anthropic has been rolling Mythos out gradually, first to a select group of 40 organisations that are almost exclusively American. These include Amazon and Microsoft as well as large banks such as JPMorgan Chase and Morgan Stanley.
One executive at a large UK company said they had been discussing the specific vulnerabilities that Mythos exposed with American companies that have access to the model.
They added that groups were in discussions with Microsoft about securing "patches", technical fixes that close off vulnerabilities found by Mythos. This has allowed companies to begin bolstering their software before they have access to the technology.
JPMorgan Chase chief Jamie Dimon has privately warned about the importance of Mythos for the banking sector and the risks that it raises.
He cautioned a UK banking executive that organisations should tread carefully with Anthropic's new tool and advised it should be deployed in co-ordination with the government, according to people familiar with the conversation.
Similar warnings have been made by regulators, central bankers and cyber security leaders around the world. On Tuesday, German central bank chief Joachim Nagel called for all institutions to have access to Mythos to ensure a level playing field and avoid misuse.
The Bundesbank head said in a speech: "This AI model seems to be a double-edged sword, since it could be used not only to improve digital security systems, but also to leverage their vulnerabilities for malicious purposes."
Anthropic unveiled its latest model earlier this month and touted its ability to detect cyber security flaws faster than humans, a feature that has stoked fears worldwide that it will generate "exploits" that bad actors can use to take advantage of those flaws.
Earlier this week, Anthropic said it was investigating reports that a group of users gained unauthorised access to Mythos through third parties.
The UK is the only known government outside the US to have accessed a preview of the model, via its AI Security Institute, a research body that tests frontier AI systems before release.
The institute has said that Mythos represents a step up over previous AI capabilities and is able to exploit vulnerabilities "that would take human professionals days of work".
Technology minister Liz Kendall and security minister Dan Jarvis warned in a joint public letter last week that "AI cyber capabilities are accelerating even faster than had been previously envisaged" in light of Mythos and urged businesses to fortify their cyber defences.
UK lenders also raised the topic with chancellor Rachel Reeves at a roundtable on Wednesday. Leading British banks, insurers, exchanges and regulators met this week to discuss the cyber security risks of Mythos and of AI models more generally.
The cross-market operational resilience group, which is co-chaired by the Bank of England and the UK Finance banking trade association, said the meeting had "reflected on the challenges these models present from a cyber security perspective".
It said companies had agreed to use AI "to strengthen cyber defence, for example through ongoing efforts to reduce the attack surface available; improving threat detection and investigation; and further exploration into automating mitigation and response measures".
The Department for Science, Innovation and Technology did not immediately respond to requests for comment.
Additional reporting by Martin Arnold in London