
Anthropic has announced it will not release its new artificial intelligence model, Mythos, to the general public, citing concerns that the system's capabilities could facilitate catastrophic cyberattacks. The company stated the model is too dangerous to release due to its proficiency in identifying and exploiting software vulnerabilities.
Instead of a public launch, Anthropic is limiting access to a small group of major technology companies. According to reporting from Fortune, the company intends to provide these partners, whose software serves as the foundation for various digital services, with early access to give defenders time to strengthen their systems. This effort is associated with Project Glasswing, an initiative focused on securing critical software for the AI era.
The capabilities of the Mythos model have triggered urgent warnings within the global financial sector. Federal Reserve Chair Jerome Powell and U.S. Treasury Secretary Scott Bessent have warned bank CEOs about the specific security risks posed by the model, according to Bloomberg.
The Bank of England has also raised alarms regarding the threat from the AI system that Anthropic deemed too dangerous for public release, as reported by The Telegraph. These warnings come as policymakers and cybersecurity professionals express concern over the potential for AI-driven exploits to target critical financial infrastructure.
Anthropic is not the only AI laboratory developing systems with these capabilities. Axios reports that OpenAI is preparing a new model, internally referred to as Spud, which could match Mythos in its cybersecurity capabilities.
OpenAI is also working on an advanced system specifically focused on cybersecurity. The company plans to implement a phased rollout of this system to a small group of partners to ensure that cyber defenders have a head start over potential attackers.
While some analysts suggest these limited release strategies may be intended to create marketing hype around new models, most cybersecurity experts agree that AI-driven capabilities have reached a dangerous tipping point.
Experts warn that the threat is not limited to unreleased models. Fortune reports that existing, publicly available AI models are already capable of executing sophisticated cyberattacks within minutes.
AI systems are increasingly automating or semi-automating tasks that previously required advanced human expertise. This automation allows attackers who lack high-level technical skills to launch large-scale operations that were previously out of reach for non-experts.
The current caution follows previous security incidents involving Anthropic's technology. Reuters reports that in 2025, hackers exploited the Claude AI model to attack approximately 30 organizations worldwide.
These developments have led AI and cybersecurity professionals to raise concerns during the week of April 6, 2026, regarding the emergence of new national security risks associated with large language models that possess advanced offensive cyber capabilities.