
The project is based on Claude Mythos Preview, which is only available to partners via Project Glasswing.
"Launch partners will use Mythos Preview for defensive security work and share what they learn so the whole industry can benefit," Anthropic said in an April 8 announcement.
"Access has also been extended to around 40 additional organisations that build or maintain critical software infrastructure, so they can scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits across these efforts, as well as $4M in direct donations to open-source security organisations."
CrowdStrike, one of the key partners in the project, highlighted the importance of the effort given that adversaries are also finding vulnerabilities faster.
"The window between a vulnerability being discovered and being exploited by an adversary has collapsed - what once took months now happens in minutes with AI," Elia Zaitsev, Chief Technology Officer at CrowdStrike, said.
"Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it's a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one."
George Kurtz, President and CEO of CrowdStrike, added that the more capable AI becomes, the more security it requires.
"That's why Anthropic chose CrowdStrike as a founding member of their security coalition for Claude Mythos Preview - a technical partnership," Kurtz said.
"Falcon secures AI where it executes. AI is creating the largest security demand driver since enterprises moved to the cloud. Claude Code is changing how people use computers. OpenClaw is set to reshape how enterprises automate. Mythos may be the most capable frontier model yet. It won't be the last. All of these AI innovations meet enterprises at the endpoint. That's where they access data, make decisions, and also create risk."
Jim Zemlin, CEO of the Linux Foundation, said that Project Glasswing is particularly relevant to open source projects.
"Open source maintainers - whose software underpins much of the world's critical infrastructure - have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software," Zemlin said.
"By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams."
However, while the project's partners have praised the initiative, others have pointed out that such technologies cut both ways.
"What stands out isn't the specific Anthropic release, but what it signals. As LLMs get better at reasoning over code, vulnerability discovery stops being a purely human advantage," Erik Bloch, Vice President, Information Security at Illumio, told Cyber Daily.
"LLMs are fundamentally language engines, and code is just another language. That's why it's not surprising they can find bugs and vulnerabilities that humans or rule‑based tools miss, especially subtle, logic‑level issues. The challenge is that this cuts both ways. Attackers can use the same models to identify weaknesses to exploit, or even to introduce intentionally hard-to-spot vulnerabilities.
"From that perspective, limiting exposure makes sense. Attackers will use tools like this the moment they can, which is why vendors will want to use them first, to find and fix issues before someone else does."