
Anthropic has launched Project Glasswing, a new initiative that brings together some of the world's largest technology companies to test how advanced Artificial Intelligence (AI) can be used to secure critical software systems. The move reflects a growing urgency across the industry: AI is not just changing how software is built; it is also reshaping how vulnerabilities are discovered and exploited.
For years, identifying serious software flaws required deep expertise and time. Many vulnerabilities went unnoticed for decades. That constraint is now breaking. Anthropic says its frontier model, Claude Mythos Preview, can identify high-risk vulnerabilities at a scale that rivals, and in some cases exceeds, skilled human researchers. More importantly, it can do so faster and with minimal human input.
This shift has two sides. The same capability that strengthens defence could also accelerate attacks. The window between when a vulnerability is discovered and when it is exploited is shrinking, forcing organisations to rethink how they approach security.
Unlike typical AI launches, Anthropic is not releasing this model publicly. Instead, access is limited to a closed group of partners, including Amazon Web Services, Microsoft, Google, Cisco, and Palo Alto Networks. These organisations are using the model to test real-world defensive use cases across critical infrastructure.
The focus areas are practical and immediate: scanning large codebases, identifying hidden vulnerabilities, testing system resilience, and improving how software is secured before deployment.
Anthropic has committed up to USD 100 million in usage credits, along with funding support for open-source security efforts, signalling that this is intended as a long-term ecosystem play rather than a short-term experiment.
Initial findings point to a clear step forward. The model has already uncovered thousands of high-severity vulnerabilities across operating systems, web browsers, and widely used software components. Some of these flaws had remained undetected despite years of testing.
In several cases, the AI was also able to map how vulnerabilities could be exploited, mirroring real-world attack patterns. For security teams, this moves testing closer to how adversaries actually operate. The implication is straightforward: security can no longer rely on periodic audits. It needs to become continuous, adaptive, and AI-driven.
Participants in the initiative are framing this as more than a new tool.
Anthony Grieco, SVP & Chief Security & Trust Officer, Cisco, said, "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats."
Amy Herzog, Vice President and CISO, Amazon Web Services, added: "We've been testing Claude Mythos Preview in our own security operations... where it's already helping us strengthen our code."
Igor Tsyganskiy, EVP of Cybersecurity and Microsoft Research, Microsoft, pointed to scale: "The opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented."
The common thread is clear: security is moving from being reactive to predictive.
A significant part of Project Glasswing focuses on open-source software, which underpins much of today's digital infrastructure. These systems are widely used but often under-resourced from a security standpoint. By extending AI capabilities to maintainers and contributors, the initiative aims to close long-standing gaps in vulnerability detection and response. If successful, this could make advanced security tooling accessible beyond large enterprises, reshaping how software ecosystems are protected.
This is less about adopting another tool and more about rethinking security architecture for an AI-driven environment. Project Glasswing is still in its early stages, but its direction is clear. Anthropic plans to expand participation, share findings with the broader ecosystem, and work with industry and public-sector stakeholders to evolve cybersecurity practices. This includes areas such as vulnerability disclosure, automated patching, and secure-by-design development.
What stands out is the timing. AI capabilities are advancing rapidly, while existing security frameworks are still catching up. In that context, Project Glasswing is not just an experiment; it is an early attempt to ensure that defenders do not fall behind.