
UK financial regulators are conducting urgent assessments of the systemic and cyber risks associated with the latest artificial intelligence model from Anthropic, known as Claude Mythos Preview. According to reporting from the Financial Times on April 12, 2026, officials are coordinating with the government's primary cyber security agency and major financial institutions to evaluate the model's potential impact on critical infrastructure.
The regulatory response involves a collaboration between the Bank of England, the Financial Conduct Authority (FCA) and HM Treasury. These bodies are currently in discussions with the National Cyber Security Centre (NCSC) to investigate potential vulnerabilities within critical IT systems that have been highlighted by the capabilities of the Claude Mythos Preview model.
As part of the risk assessment process, regulators plan to brief representatives from major British banks, insurers, and exchanges. The meeting, expected within a fortnight of the April 12, 2026 report, will focus on the specific cyber security risks posed by the new AI model. The briefing follows high-level concerns about the unprecedented capabilities of the software.
The Bank of England has declined to comment on the matter, while the Treasury, NCSC, and FCA were not immediately available for comment. Anthropic did not respond to requests for comment regarding the regulatory scrutiny.
The UK's actions follow similar concerns in the United States. On April 10, 2026, U.S. Treasury Secretary Scott Bessent convened a meeting with major Wall Street banks to discuss the potential cyber risks associated with the model.
Anthropic has deployed the model as part of an initiative called Project Glasswing. This is a controlled program that permits a limited number of selected organizations to utilize the unreleased Claude Mythos Preview model specifically for defensive cyber security purposes.
The urgency among regulators stems in part from the model's demonstrated ability to identify software flaws. In a blog post published in April 2026, Anthropic stated that the model had already identified thousands of major vulnerabilities across various operating systems, web browsers, and other widely used software applications.
The model's ability to pinpoint vulnerabilities at scale has prompted UK authorities to move quickly to determine whether existing financial IT systems are sufficiently protected against exploits that could be derived from such capabilities.