Anthropic Claude Mythos Preview | CrowdStrike

The Claude Mythos Preview matters for every enterprise. Frontier models raise the ceiling for both offense and defense. Our job is to make sure defenders hold the advantage. That is what we have always done. That is what we do today.

Today, CrowdStrike is a founding member of Project Glasswing. Anthropic builds the model. CrowdStrike secures AI where it executes. That's the division of labor the industry needs.

CrowdStrike evaluated the security implications of this model, and it brings something no other coalition member has: sensor-level visibility across every endpoint in the enterprise. A trillion events a day. 280+ tracked adversary groups. 1,800+ AI applications already discovered across customer environments. This data is what makes AI governance enforceable.

CrowdStrike's assessment of Mythos Preview confirms that frontier AI capabilities compound when paired with real-world threat intelligence, enterprise-scale visibility, and machine-speed enforcement.

Frontier AI is not a single product. It is a new category of enterprise infrastructure.

Claude Code is changing how developers build software. AI agents are reshaping how enterprises automate operations. Anthropic's Mythos Preview expands the reasoning, planning, and execution capabilities of AI agents. They all touch the endpoint: where data is accessed, decisions are made, value is delivered, and risk is born.

New models are also where the opportunity is largest. The same frontier models that expand the attack surface give defenders a capability advantage that did not exist a year ago: discovering vulnerabilities, detecting threats, and responding to incidents faster than ever before.

Adversaries will continue to turn the same capabilities to malicious purposes. CrowdStrike's 2026 Global Threat Report found an 89% year-over-year increase in attacks by adversaries using AI. The use of AI for vulnerability discovery and exploit development is accelerating on both sides.

Model safety is the builder's responsibility. Deployment governance is ours.

Anthropic develops frontier models under its Responsible Scaling Policy, evaluating capabilities before release and red-teaming for dangerous behaviors.

This work addresses what the model can do. It does not address what happens when the model runs inside an enterprise with access to customer data, financial systems, and thousands of users deploying it without governance. When an AI agent connects to a CRM, queries a database, or triggers a workflow, that is not a model safety question. It is a deployment governance question.

CrowdStrike secures AI where it executes. Discovery of every AI agent in the environment. Visibility into what those agents access and what they do. Protection of sensitive data flowing through AI workflows. Runtime protection for AI agents connecting to enterprise systems.

A frontier model is the engine. Data is the fuel. The platform is how you operationalize it.

CrowdStrike's role in this coalition is grounded in capabilities no other member has.

Originally published by CrowdStrike.com
