OpenAI unveils restricted cybersecurity AI model after Anthropic Mythos -- Who gets access?

The News International · 10d ago

Anthropic's Claude Mythos has been flagged as a powerful AI model due to its unprecedented cyber capabilities

OpenAI has unveiled its latest cybersecurity model, following the release of Anthropic's powerful Claude Mythos, which is capable of identifying critical vulnerabilities in operating systems and web browsers.

Like Claude Mythos, OpenAI's newly released model will be available to a limited number of partners, with the aim of preventing the misuse of powerful AI models.

Named GPT‑5.4‑Cyber, the model is designed specifically for defensive security tasks. More permissive by design, it has a "lower refusal boundary," meaning it will not block sensitive security requests that general-purpose models might flag as risky.

The access programme is expected to include "thousands of verified individual defenders and hundreds of teams responsible for defending critical software."

The model includes new capabilities such as binary reverse engineering, allowing defenders to analyze compiled software for malware and vulnerabilities without needing the original source code.

OpenAI's model will be available to "the highest tiers" of users, including vetted security vendors, organizations, and researchers, under its Trusted Access for Cyber (TAC) scheme.

Higher-tier access may require users to waive "Zero-Data Retention" (ZDR) so OpenAI can maintain visibility into how the model is being used.

"This is particularly true for developers and organizations accessing our models through third-party platforms where OpenAI may have less direct visibility into the user, the environment, or the purpose of the request," OpenAI wrote in its blog post.

The launch of the GPT‑5.4‑Cyber model is based on certain guiding principles. The first is democratizing access: making tools available to legitimate actors of all sizes through objective criteria.

OpenAI also aims to build ecosystem resilience by supporting the broader community through grants, such as a $10 million program, and open-source initiatives like Codex Security.

OpenAI classifies GPT‑5.4 as a "high" cyber capability model under its Preparedness Framework.

"We believe the class of safeguards in use today sufficiently reduce cyber risk enough to support broad deployment of current models. We expect versions of these safeguards to be sufficient for upcoming more powerful models," OpenAI wrote.

Originally published by The News International
