Anthropic's Mythos Breach: How Hackers Cracked Open AI's Most Dangerous Cyberweapon on Day One

WebProNews · 21h ago

A shadowy crew of AI enthusiasts pierced the defenses around Anthropic's Mythos on launch day. Boom. Access granted through a sloppy third-party vendor. Now this powerhouse model -- designed to hunt vulnerabilities across every major operating system and browser -- sits in unauthorized hands. TechCrunch broke the story, citing Bloomberg's reporting on the intrusion.

Mythos forms the core of Project Glasswing, Anthropic's bid to arm elite security teams with AI that autonomously crafts exploits. Think zero-days in Windows, macOS, Chrome, Firefox -- you name it. The company rolled it out to just 40 vetted partners, including Apple and Amazon, precisely because it could flip from defender to destroyer in seconds. A person familiar with the matter told Bloomberg the group, huddled in a private online forum and Discord channel, sniffed out the model's URL pattern from prior leaks involving contractor Mercor. They interviewed a contractor employee, grabbed credentials, and logged in. Screenshots. Live demos. Proof delivered.

And they've been poking around ever since. Not launching attacks, they claim. Just tinkering with the forbidden toy. "The group in question is interested in playing around with new models, not wreaking havoc with them," the source insisted to Bloomberg. But capabilities like these don't stay playground-bound. Mozilla already tapped Mythos Preview directly from Anthropic to patch 271 Firefox bugs in its latest release. Firefox CTO Bobby Holley called it a "firehose of bugs," forcing teams to scramble with resources pulled from elsewhere. Wired detailed how this AI shifts vulnerability hunting into overdrive, exposing flaws humans miss -- but demanding discipline to wrangle the flood.

Anthropic moved fast. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said. No signs of core system compromise, they added. Yet whispers on X suggest the breach hit multiple unreleased models too. One post from @ns123abc laid it bare: hackers guessed URLs post-Mercor leak, slipped in via lingering contractor creds. The whole pipeline exposed. Posts from @coinbureau and @LarkDavis amplified the alarm, noting restrictions to 40 firms exactly to curb cyber risks.

This isn't isolated sloppiness. The National Security Agency deploys Mythos despite Pentagon labels tagging Anthropic as a supply-chain risk -- a feud spilling into court. Axios reported wider NSA uptake, prioritizing cyber edge over bans. UK counterparts route through the AI Security Institute. Meanwhile, the breach spotlights vendor weak links in AI's high-stakes chain. Contractors like Mercor, hit earlier, leak naming conventions. Guesses turn into gateways. What if next time it's nation-states, not forum dwellers?

Broader ripples hit fast. CNBC aired segments on the leak during 'Fast Money,' with Kate Rooney flagging Silicon Valley tremors. The Financial Times confirmed Anthropic's probe into the 'powerful' model handed to a trusted few. Reddit threads in r/ClaudeAI and r/ClaudeCode buzzed with leaked excerpts, underscoring containment struggles for potent tech.

So where does this leave enterprise AI security? Tools like Mythos promise to outpace human hackers, spotting multi-step chains others ignore -- like a 27-year-old OpenBSD flaw or FreeBSD exploits. But day-one cracks erode trust. Partners demand ironclad isolation; regulators eye tighter controls. Anthropic's "safe AI" badge takes a hit, even as it sues DoD over blacklists. Vendors scramble to audit creds. And those forum users? Still inside, testing limits. One wrong prompt away from chaos.

Originally published by WebProNews
