
European banks are maintaining close contact with financial regulators regarding the potential risks posed by Anthropic's new AI model, Mythos, citing concerns over its advanced coding capabilities and implications for cybersecurity in the financial sector.
Christian Sewing, president of the German Banking Association and chief executive officer of Deutsche Bank, confirmed on April 20, 2025, that banks are actively discussing the model with regulators as part of ongoing risk management efforts. He emphasized that while the situation does not currently warrant alarm, the technology's ability to identify software vulnerabilities at scale necessitates careful monitoring.
"It's certainly not something that's causing panic or setting off any alarm bells on our end right now, but it's definitely something we need to keep in mind in our day-to-day risk management -- and that's exactly what we're doing," Sewing told journalists. His remarks followed internal discussions last week, with a follow-up meeting scheduled for later on Monday.
The heightened attention stems from Mythos's demonstrated capacity to generate and analyze code at a level that experts say could significantly accelerate the discovery of cybersecurity weaknesses in software systems. While Anthropic has positioned Mythos as a tool for improving software reliability and security through advanced reasoning, financial institutions and regulators are assessing whether the same capabilities could be misused to exploit vulnerabilities in banking infrastructure.
European banking supervisors, including the European Central Bank and national authorities in Germany, France, and the Netherlands, have increased scrutiny of generative AI models deployed or tested within financial services, particularly those with strong reasoning and code-generation functions. The concerns are part of a broader regulatory push to ensure that AI systems used in or interfacing with critical financial infrastructure meet stringent safety, transparency, and resilience standards.
Banks are responding by strengthening internal AI governance frameworks, conducting specialized risk assessments on third-party AI tools, and enhancing collaboration with regulators to share insights on emerging threats. Rather than imposing blanket restrictions, financial institutions are focusing on understanding the specific use cases and limitations of models like Mythos within controlled environments.
Sewing noted that the German Banking Association plans to continue these discussions beyond the initial talks, aiming to develop coordinated guidance for member banks on evaluating AI-related operational and security risks. The association has previously issued advisories on generative AI use in areas such as customer service automation, fraud detection, and internal software development, stressing the need for human oversight and rigorous testing.
Anthropic, the AI safety-focused company behind the Claude series of models, introduced Mythos as a research-oriented system designed to excel in complex reasoning, software engineering, and strategic planning tasks. While not yet released for broad commercial use, Mythos has drawn attention in technical circles for its performance on coding benchmarks and its ability to identify logical flaws in codebases.
The model's capabilities have reignited debates about the dual-use nature of advanced AI systems -- tools intended to improve security could also be repurposed to uncover weaknesses for malicious ends. Regulators and industry groups are increasingly calling for transparency around training data, model behavior, and safeguards, particularly when such systems are accessed by or integrated into financial technology workflows.
As of April 2025, no formal restrictions have been placed on Mythos by European financial authorities, but ongoing dialogue suggests that supervisory expectations around AI risk management in banking are evolving rapidly. Banks are expected to incorporate these developments into their internal control frameworks, with particular attention to third-party AI dependencies and cyber resilience testing.