
Banks raise alarm over Anthropic's Mythos AI: Could it exploit financial system weaknesses?
Regulators from Australia and South Korea have raised concerns regarding Anthropic's AI model Mythos, arguing that it has the potential to destabilize entire banking systems.
"ASIC is closely monitoring these developments along with peer regulators to assess possible implications for the Australian market," a spokesperson for the Australian Securities and Investments Commission (ASIC) told Reuters on Monday.
The regulator added that it is engaged in talks with other regulators, government agencies, and the financial sector to "understand and respond to changing technologies."
The agency emphasized that its top priority is to safeguard customers and clients against risks posed by advanced AI systems.
The Australian Prudential Regulation Authority (APRA), the country's banking regulator, echoed ASIC's concerns, stating it will "continue to assess the implications of technological advancements to ensure the ongoing safety and resilience of the financial system."
South Korea's Financial Supervisory Service (FSS) and the Financial Services Commission (FSC) told Reuters they had convened with banks and insurance companies to review Mythos-related risks.
Ever since Anthropic announced its AI-powered bug detection tool, concerns have grown that it could be weaponized to find and exploit software vulnerabilities. For that reason, Mythos won't be released to the public.
Experts warn that exploitation of such bugs in the financial sector could be catastrophic. Kolja Gabriel, a member of the executive board at the German Banking Association, recently announced that German banks, financial watchdog BaFin, and other national authorities are examining the potential risks of AI tools like Mythos.
Last week, the Netherlands' National Cyber Security Centre (NCSC) said that Mythos doesn't just detect vulnerabilities; it can also combine them to construct chained attacks.
"This increases the risk that small, seemingly harmless bugs could, when combined, enable a serious attack. At the same time, there is a lack of public technical details to verify the full impact - it is plausible that real vulnerabilities are being exploited, but it is less clear how easily they can be exploited in practice," the agency explained.
According to Canadian news outlet The Globe and Mail, the Canadian Financial Sector Resiliency Group (CFRG), the Department of Finance, the Office of the Superintendent of Financial Institutions (OSFI), and executives from Canada's six biggest banks recently gathered to discuss AI-related cybersecurity risks.