
A powerful new artificial intelligence model developed by Anthropic has prompted a coordinated response from leading global financial regulators, with authorities in the United States, Canada and the United Kingdom moving rapidly to assess risks and strengthen bank defences against emerging cyber threats.
The urgency of the situation was underscored in Washington this week, where US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened senior Wall Street executives for a hastily arranged meeting, warning that Anthropic's latest model, Mythos, could usher in a new phase of more sophisticated cyberattacks. Officials encouraged banks to take the unusual step of deploying the same technology internally to identify potential vulnerabilities in their own systems, according to people familiar with the discussions.
Major financial institutions including JPMorgan Chase, Goldman Sachs and Citigroup are among those testing the model or preparing to do so, according to a Bloomberg report.
The warning in the United States quickly resonated across other jurisdictions. In Canada, the Bank of Canada convened major lenders and financial authorities under its Financial Sector Resiliency Group to examine the cybersecurity implications of advanced AI systems. Officials emphasised the need for information sharing and preparedness while stating that there was no immediate or active threat to the financial system.
In the United Kingdom, the Bank of England is preparing to address the issue in upcoming meetings with banks, regulators and cyber agencies, placing Anthropic's model on the agenda of key resilience and artificial intelligence taskforce discussions.
The near-simultaneous actions across major economies signal a growing consensus among policymakers that rapid advances in artificial intelligence could enable a new class of cyberattacks capable of exploiting multiple vulnerabilities simultaneously, a challenge that has traditionally stretched even highly skilled human hackers.
Anthropic has stated that its Mythos model is capable of identifying and exploiting weaknesses across major operating systems and web browsers, prompting the company to limit its release to a small group of organisations under a controlled testing programme. The initiative, involving select technology firms and at least one major bank, is aimed at identifying potential flaws and establishing safeguards before a broader rollout.
Regulators increasingly view such capabilities as a potential systemic risk, particularly as many of the banks involved in the US discussions are designated as systemically important institutions, meaning disruptions to their operations could have wider implications for the global financial system.
The episode also reflects a shift in regulatory strategy, with authorities moving beyond oversight of AI deployment to actively encouraging financial institutions to adopt advanced technologies to defend against evolving threats. While officials across jurisdictions have described the discussions as precautionary, the speed and coordination of the response underscore the seriousness with which regulators are approaching the potential for AI to reshape the global cyber risk landscape.