News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic Mythos Develops Into Insignificant Outcome * The Register

Anthropic's Mythos model is designed to discover software vulnerabilities, yet its release has stirred concern. Initially introduced under the Project Glasswing initiative, the model was restricted to select organizations for vulnerability assessment. Recent developments, however, reveal that unauthorized access to Mythos occurred, heightening cybersecurity concerns.

Unauthorized Access Incident

On Wednesday, an Anthropic representative confirmed that individuals outside the Glasswing partners might have accessed the Mythos model. This access was not through Anthropic's authorized production API. The spokesperson stated, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The third-party vendor, linked to Anthropic's model development, has not been publicly identified. According to Bloomberg, a small group exploited their knowledge of the model's online location, derived from prior leaks, to gain access.

Mercor Data Breach

This unauthorized access coincided with a data breach at Mercor, an AI staffing firm that supplies contractors to major AI labs. Earlier in the month, Mercor acknowledged being affected by the LiteLLM supply-chain attack. Reports suggested that the intruders, identified as members of a private Discord channel, began accessing Mythos the same day Anthropic announced Project Glasswing.

Mythos' Capabilities and Limitations

Despite its marketing hype, early user feedback about Mythos indicates limitations. While organizations like AWS and Mozilla have praised its speed in identifying vulnerabilities, it has not outperformed elite human cybersecurity researchers. Mozilla's CTO, Bobby Holley, disclosed that Mythos found 271 vulnerabilities in Firefox but acknowledged that any vulnerabilities it discovered could also have been identified by skilled human researchers.
Claims of Overhype

Researchers have raised concerns about the veracity of the claims surrounding Mythos. While Anthropic touted its ability to discover "thousands of high- and critical-severity vulnerabilities," critics argue these numbers are exaggerated. For instance, VulnCheck researcher Patrick Garrity estimated the actual count at around 40, and no confirmed zero-day exploits were documented. Claims regarding 181 Firefox vulnerabilities were also scrutinized, revealing that most findings stemmed from environments without standard security measures.

Concerns in the Cybersecurity Community

Experts have mixed reactions about unauthorized access to Mythos. Snehal Antani, CEO of Horizon3.ai, stated the security community should not overreact. He emphasized that adversaries do not require Mythos for vulnerability research; existing open-source models already facilitate this process.

* Unauthorized Access: Occurred via a third-party vendor.
* Vulnerability Discovery: Mythos' findings are comparable to skilled human researchers.
* Hype vs. Reality: Reports indicate exaggerated claims of Mythos' capabilities.

The incident surrounding Anthropic's Mythos model illustrates the challenges of maintaining security and managing expectations in the rapidly evolving AI landscape. As the investigation continues, the cybersecurity community watches closely, evaluating the model's true potential and implications.

Anthropic · Mercor · Discord
El-Balad.com · 45m ago

Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

A private Discord channel, dedicated to sniffing out unreleased AI models, pulled off the unthinkable. They accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor.

Bloomberg broke the story first, detailing how the group provided screenshots and a live demo as proof, and reporting that the breach occurred through a vendor environment. Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch.

Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations.

The intruders? A handful of enthusiasts in that Discord server. They drew from a Mercor data breach earlier in April, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg.

But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet their own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular use without malice -- but the precedent chills.

And the fallout? Anthropic's probe continues, no breaches beyond the vendor noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond. Mozilla confirmed early tests found vulns, per TechCrunch snippets. But this slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fix: tighten vendor oversight. Rotate keys. Obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models in reach, hinting at persistent Discord hunts.

Irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: If Mythos can't shield itself, what hope for the wild? Expect audits. Partner scrutiny. Maybe Mythos turns inward, probing Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.
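The "obfuscate endpoints" fix mentioned above can be sketched in a few lines. This is a minimal illustration only, using Python's standard-library secrets module; the path scheme is hypothetical and not Anthropic's actual deployment layout:

```python
import secrets

def make_endpoint_path(model_name: str) -> str:
    """Build a deployment path that cannot be guessed from naming patterns.

    A predictable scheme such as f"/models/{model_name}-preview" can be
    enumerated once the naming convention leaks; appending a random
    URL-safe token makes each endpoint effectively unguessable.
    """
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness
    return f"/models/{model_name}-{token}"

# Two deployments of the same model land at different, unpredictable paths.
path_a = make_endpoint_path("mythos-preview")
path_b = make_endpoint_path("mythos-preview")
```

An obscured path is only a speed bump, of course; the durable fix is authorization that does not depend on the URL staying secret at all.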

Discord · Anthropic · Mercor
WebProNews · 1h ago

A group of users leaked Anthropic's AI model Mythos by reportedly guessing where it was located | Fortune

The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported.

Earlier this month, the group was able to access the program in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located, drawing on knowledge of Anthropic's past naming practices that hackers had obtained in a breach of AI training startup Mercor. Although the group that accessed it has not been using the model for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported.

Anthropic did not immediately respond to Fortune's request for comment. A spokesperson from Anthropic told Bloomberg the company was "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."

The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies -- including Microsoft, Apple, and Google -- to beef up their security ahead of a wider release, thousands of people likely had access to the program across these companies, which makes a leak nearly inevitable, he said. "It was bound to happen," Lindner said. "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it."

Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions.
The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser. And yet, Mythos' release has been plagued by security breaches from the start. Fortune was the first to report on the model's existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database.

For Lindner, this most recent unauthorized access shows it's likely U.S. adversaries already have access to this tech, which could put U.S. companies and other systems at risk of attacks. "If some group -- some random Discord online forum -- got access to it, it's already been breached by China," Lindner told Fortune.

Although Lindner is still unsure how much of Mythos' supposed danger is real or just marketing hype -- OpenAI's Sam Altman this week called Anthropic's promotion of Mythos "fear-based marketing" -- it's clear cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks. "The real thing is there's a real compression of timelines here for defenders," he said.

AI is unique in its ability to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly tackle a weak spot in a company's security system, whereas a human may eventually give up. It also empowers less experienced developers to commit cyberattacks, partly by drawing on the myriad documentation available on the web about previous exploits and using it to inform an AI model and adjust its attacks for specific situations. "It's the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now," he said.
Lindner said the fact that the program was reportedly accessed by third-party contractors means that, even more than before, companies need to limit who has access to their most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, which may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and continue on as before are even more at risk, he said. "AI is not a golden ticket, but if you're not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side," he said.

Anthropic · Mercor · Discord
Fortune · 1h ago

AI Startup Mercor Faces Lawsuit Over Data Breach | PYMNTS.com

The $10 billion company, which has worked with the likes of Meta, has been served with at least seven class-action lawsuits in the wake of the breach, The Wall Street Journal (WSJ) reported Thursday (April 23). The suits allege the breach exposed Mercor contractor information that included job interview recordings, facial biometric data and screenshots of employees' computers. One suit, the report added, claims Mercor collected applicant-vetting data, such as background checks, which it shared with partners in violation of federal regulations.

According to plaintiffs, the company's practices include monitoring its contractors' computers and sharing that data with clients, using recorded candidate interviews to train AI models, and training client models on materials potentially owned by other companies.

"We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," Mercor said in a statement to the WSJ. "We take the privacy of our customers, contractors, employees and those we interview very seriously, and we comply with all relevant laws and regulations," the statement added, noting that the startup acted quickly to remedy the breach, which affected several other companies. "We are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings," it said.

The WSJ report added a comment from a Meta spokesperson that the company has paused its work with Mercor and is investigating the breach. PYMNTS wrote earlier this week about the "new consensus" being formed around the "data problem" beneath the race to deploy agentic AI.
"More autonomous AI systems will raise the stakes for how data is created, governed, accessed and protected," that report said. "Synthetic data needs clearer standards. Real-world data needs tighter minimization. And the systems tying it all together need a stronger foundation of trust, security and control." Also this week, PYMNTS examined the changing cybersecurity landscape, arguing that while few of this year's high-profile incidents can be called "AI attacks," it is still hard to ignore the corresponding uptick in AI-powered offensive capability. "Anthropic's Claude Mythos Preview, for example, has reportedly demonstrated the ability to autonomously discover and exploit vulnerabilities across major operating systems and web browsers, including decades-old bugs in widely trusted systems," PYMNTS wrote.

Anthropic · Mercor
PYMNTS.com · 2h ago

Anthropic's 'Too Dangerous To Release' AI Model Was Accessed By Discord Group On Day One

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. And because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations - 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA).

Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in. The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be.

According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there's no evidence that the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems.

Project Glasswing

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview.
The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg.

Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

A Low-Tech Breach

The unauthorized access was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.
So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.

Discord · Anthropic · Mercor
Signs Of The Times · 2h ago

AI recruiting startup Mercor hit with at least seven class-action lawsuits after hacking: What the company has to say

Mercor, a Silicon Valley startup valued at $10 billion, is facing a wave of legal trouble after a massive data breach exposed the private information of thousands of its contractors, a report has said. According to The Wall Street Journal, at least seven class-action lawsuits have been filed against the company in recent weeks after it confirmed a third-party data breach.

Mercor hires contractors to provide feedback that helps train artificial intelligence (AI) models for tech giants like OpenAI, Anthropic and Meta. However, a breach involving a third-party partner has reportedly leaked everything from recorded job interviews to facial scans and even screenshots of workers' private computer screens.

The lawsuits, including one filed on Tuesday (April 22) in Northern California, do more than just complain about the hack; they offer a rare look at the aggressive tactics the company allegedly uses to gather data. According to the legal filings, plaintiffs claim Mercor engaged in several controversial practices, including tracking contractors' screens and sharing that private activity with clients; sharing background checks and applicant data with partners in ways that may violate federal regulations; using recorded video interviews of job candidates to train AI models without proper disclosure; and training models on materials that might actually belong to other companies.

In a public statement, the startup stood its ground, calling the lawsuits "speculative" and inaccurate.
Regarding the data breach itself, the company noted that it was not the only victim. "We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," Mercor said in a statement, according to the publication. "We take the privacy of our customers, contractors, employees and those we interview very seriously, and we comply with all relevant laws and regulations," the company said, adding that it acted quickly to remediate the data breach. "We are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings," it said.

Meta, meanwhile, has already stopped working with Mercor and indefinitely suspended all work with the startup.

Mercor · Anthropic
The Times of India · 4h ago

Workers Sue $10B AI Startup Mercor Over Alleged Data Collection and Exposure

A $10 billion AI startup that supplies training data to companies like OpenAI, Anthropic, and Meta is facing a wave of lawsuits over how it collects and handles sensitive worker data. According to a report by The Wall Street Journal, at least seven class-action lawsuits have been filed against Mercor in recent weeks following a third-party data breach that allegedly exposed contractor information. The lawsuits claim the breach included highly sensitive material, ranging from recorded job interviews to facial biometric data and screenshots of workers' computers.

One class-action suit filed in Northern California claims Mercor collected extensive applicant data -- including background checks -- and shared it with partners in violation of federal regulations. Plaintiffs allege that Mercor's practices go beyond standard hiring processes. According to the suit, the company monitored contractors' computers, used recorded candidate interviews to train AI models, and may have trained client systems on materials owned by other companies.

In one account cited in the lawsuit, a plaintiff alleged that workers were encouraged to use real company data in tasks, provided it was slightly altered or anonymised. When the plaintiff tried to avoid including sensitive details, reviewers reportedly pushed back, criticising the work as too vague. Another contractor alleged he encountered financial models and prompts that appeared to contain proprietary information, including what the lawsuit describes as "pre-project metadata, hidden defined names, institutional data-terminal markers, real lender or counterparty names, irregular numeric precision, and other features that raised serious provenance questions."

Mercor has denied the allegations. "We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," the company said in a statement.
It added that it "take[s] the privacy of our customers, contractors, employees and those we interview very seriously" and that it complies with relevant laws and regulations. The company also said it acted quickly to address the breach, noting that "we are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings."

The case is drawing attention to how AI companies source the data used to train models. The Journal reports that Mercor previously attempted to buy work materials from individuals on LinkedIn, including documents those individuals did not necessarily own the rights to. Online postings also suggested the company offered payments for personal-finance files and even Google Maps histories.

Workers also described a system of continuous monitoring. Contractors are required to install tracking software called Insightful, which captures screenshots of their computers during work sessions. One lawsuit alleges that this software recorded activity across hundreds of applications, including personal accounts, and that workers were not "clearly informed" of the extent of the monitoring. Mercor said it informs workers that screenshots may be taken during billing hours and instructs them to use only work-related applications while the software is active.

The fallout is already affecting Mercor's relationships. Meta has paused its work with the company and is investigating the incident, according to a spokesperson cited by the Journal. Anthropic declined to comment, while OpenAI did not respond to requests. The situation highlights growing tension in the AI industry, where companies are under pressure to secure large volumes of high-quality data to train increasingly advanced models.

Mercor · Anthropic
Techloy · 5h ago

Anthropic's locked-down Mythos leaks

The Rundown: Access to Anthropic's Mythos model reportedly leaked into a Discord group within days of launch, after users guessed the company's deployment URL and naming scheme from patterns exposed in the recent Mercor breach.

Why it matters: The first alleged unauthorized use of the AI model that had the White House and others calling emergency meetings didn't come from China, Russia, or another rival nation -- it came from a random Discord group. Not a great start, and the problem only compounds as partner access grows and the models get more dangerous.

Discord · Mercor · Anthropic
The Rundown AI · 6h ago

Outsiders are already accessing Anthropic's new AI model, but is Claude Mythos really that powerful?

According to reporting by Bloomberg, a small number of people who are members of a private Discord channel dedicated to researching unreleased AI models have had unofficial access to Mythos since it was first announced. Getting in was apparently simple, too.

"To access Mythos, the group of users made an educated guess about the model's online location," Bloomberg said in an article published on April 21. "They based this on knowledge about the format Anthropic has used for other models, the person said, adding that such formatting details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers." Anthropic said it was aware of the access and was investigating the report.

Shane Fry, Chief Technology Officer at RunSafe Security, said it was an example of how easily AI models can be exploited. "Unauthorised users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed," Fry said. "The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place."

Germaine Tan Shu Ting, VP Security & AI Strategy and Field CISO at Darktrace, expressed similar concerns. "It shows that the frontline remains identity," Tan Shu Ting said. "If Anthropic itself can be accessed using traditional hacking methods (reportedly coopting existing third-party access and 'internet sleuthing'), then it highlights how critical it is to assume the threat is already inside the walls."
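Fry's point about access "by just changing a model name" maps to a well-known control: authorize each credential against an explicit model allow-list rather than trusting whatever model ID a request names. A minimal Python sketch of the idea; the keys and model IDs below are invented for illustration, not Anthropic's actual identifiers:

```python
# Map each API credential to the models it has been explicitly granted.
# Keys and model IDs are purely illustrative.
ALLOWED_MODELS = {
    "vendor-key-123": {"claude-sonnet"},
    "glasswing-partner-key": {"claude-mythos-preview"},
}

def authorize(api_key: str, model_id: str) -> bool:
    """Reject any (credential, model) pair not explicitly granted,
    so swapping the model name in a request is not enough."""
    return model_id in ALLOWED_MODELS.get(api_key, set())
```

With a check like this enforced server-side, a contractor credential that was only ever granted one model cannot reach another simply by editing the model field in a request.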
However, while analysts and industry insiders have reacted to Mythos with something like awe, the actual capabilities of the model may, in reality, fall far short of Anthropic's claims.

Don't believe the hype?

Doug Britton, EVP and chief strategy officer of RunSafe Security, referred to Mythos and Project Glasswing earlier in April as a "watershed moment for AI's runaway zero-day discovery and exploitation". "AI is now uncovering memory safety bugs at massive scale, including vulnerabilities that have been hiding in production code for over 25 years - the problem isn't just that these bugs exist, it's that they're being found faster than organisations can fix them," Britton said.

But the question is - are they being found that fast? Davi Ottenheimer, security engineer and president of security consultancy flyingpenguin, has some serious doubts. "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results," Ottenheimer said in a blog post around the time Mythos and Glasswing were announced. "The Glasswing consortium is regulatory capture dressed up poorly as restraint."

Ottenheimer based his observations - rather caustic ones, it must be said - on Anthropic's own Claude Mythos Preview System Card, a "whoppingly inefficient 244-page document that devotes just seven pages to the claim that the model is too dangerous to release". According to Ottenheimer, those seven pages never mention the acronyms one might expect: CVSS, CWE or CVE.

"The flagship demonstration document turns out to be like the ending of the Wizard of Oz, a sorry disappointment about a model weaponising two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defence-in-depth mitigations stripped out. Anthropic failed, and somehow the story was flipped into a warning about its success."
Ottenheimer has many issues with Anthropic's - and, it must be said, the wider media's - claims that Mythos found "Thousands of zero-day vulnerabilities in every major operating system and every major web browser", and he pulls no punches. Referencing that claim, Ottenheimer points out that the word 'thousands' is "used once, in reference to transcripts reviewed during the alignment evaluation". "It is never used to describe vulnerabilities. The cyber security section (Section 3, pages 47-53) contains no count of zero-days at all," Ottenheimer said. "With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate, why are you teasing us with the claims about vulnerabilities at all?" Cyber Daily has reached out to Anthropic for comment.

Discord · Mercor · Anthropic
cyberdaily.au · 10h ago

Discord group says it accessed Claude Mythos by guessing location

The Anthropic AI model deemed a danger to cybersecurity may need to be more secure itself. An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release. Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security. As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as found in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics. One member of the group already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. 
Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.

Anthropic · Mercor · Discord
Mashable ME · 12h ago

Anthropic's exclusive cybersecurity tool Mythos has reportedly fallen in the hands of an unauthorized group, and the consequences could be massive | Attack of the Fanboy

A group of unauthorized users has reportedly gained access to Mythos, the powerful cybersecurity tool recently unveiled by Anthropic, TechCrunch reported. This development is significant because Anthropic has explicitly warned that Mythos is capable of identifying and exploiting vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The company has framed this technology as a double-edged sword. It previously noted that in the wrong hands, it could become a potent hacking tool rather than the defensive asset it was designed to be for enterprise security. The unauthorized access was reportedly achieved by a small group of users operating within a private online forum. According to reports, these individuals managed to secure access to the tool on the same day it was publicly announced by Anthropic. The group, which is part of a Discord channel dedicated to hunting for information about unreleased AI models, used a mix of strategies to bypass restrictions. Perhaps most concerning is how the group managed to pinpoint the location of the model: they made an educated guess about its online location, relying on their existing knowledge of the naming conventions and formats Anthropic has used for previous models. This effort was reportedly aided by information revealed in a recent data breach from Mercor, an AI training startup that works with top developers. Furthermore, the group leveraged access provided by a person who is currently employed at a third-party contractor that works for Anthropic. This individual, who was interviewed about the breach, had legitimate permission to access Anthropic models and software related to evaluating the technology for the startup, which they gained through their contract work. Anthropic has been very cautious with the distribution of Mythos. The model was released only to a select number of vendors and organizations as part of an initiative called Project Glasswing.
This limited release was specifically designed to prevent the tool from falling into the hands of bad actors who might weaponize it against corporate security. Big names like Apple, Amazon, and Cisco Systems are among the organizations that have been granted access to test the model. Amazon, which is a key partner and backer of Anthropic, also offers Mythos through its Bedrock platform to a very specific, approved list of organizations. As the utility of the tool has become known, a growing number of financial institutions and government agencies on both sides of the Atlantic have been clamoring to get on that list to better safeguard their own systems. In response to the reports, an Anthropic spokesperson provided a statement, saying, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The company has been quick to clarify that, so far, it has found no evidence that this unauthorized activity has impacted Anthropic's internal systems in any way. They maintain that the access appears to be contained within a third-party vendor's environment. While the situation sounds alarming, the source who spoke about the breach offered some perspective on the intentions of the group. The individual claimed that the users involved are primarily interested in playing around with new models rather than wreaking havoc. They have reportedly avoided running cybersecurity-related prompts on the Mythos model, choosing instead to experiment with tasks like building simple websites to avoid detection. The person also noted that this group has access to a variety of other unreleased Anthropic AI models, suggesting a broader scope of interest in the company's pipeline. This incident highlights the massive challenge Anthropic faces in keeping its most powerful and potentially dangerous technology from spreading beyond its approved partners. 
If these reports are accurate, it raises serious questions about how many other people might be using Mythos without permission and what their true objectives might be. For now, Anthropic is left to manage the fallout of this unauthorized access, which could potentially threaten the reputation of an exclusive release intended to bolster enterprise security. It is a stark reminder that even with strict initiatives like Project Glasswing, the digital perimeter is only as strong as its weakest link, especially when third-party vendors are involved in the deployment of such high-stakes software.

Mercor · Anthropic · Discord
Attack of the Fanboy · 18h ago

Anthropic's Mythos Breach: How Hackers Cracked Open AI's Most Dangerous Cyberweapon on Day One

A shadowy crew of AI enthusiasts pierced the defenses around Anthropic's Mythos on launch day. Boom. Access granted through a sloppy third-party vendor. Now this powerhouse model -- designed to hunt vulnerabilities across every major operating system and browser -- sits in unauthorized hands. TechCrunch broke the story, citing Bloomberg's reporting on the intrusion. Mythos forms the core of Project Glasswing, Anthropic's bid to arm elite security teams with AI that autonomously crafts exploits. Think zero-days in Windows, macOS, Chrome, Firefox -- you name it. The company rolled it out to just 40 vetted partners, including Apple and Amazon, precisely because it could flip from defender to destroyer in seconds. A person familiar with the matter told Bloomberg the group, huddled in a private online forum and Discord channel, sniffed out the model's URL pattern from prior leaks involving contractor Mercor. They interviewed a contractor employee, grabbed credentials, and logged in. Screenshots. Live demos. Proof delivered. And they've been poking around ever since. Not launching attacks, they claim. Just tinkering with the forbidden toy. "The group in question is interested in playing around with new models, not wreaking havoc with them," the source insisted to Bloomberg. But capabilities like these don't stay playground-bound. Mozilla already tapped Mythos Preview directly from Anthropic to patch 271 Firefox bugs in its latest release. Firefox CTO Bobby Holley called it a "firehose of bugs," forcing teams to scramble with resources pulled from elsewhere. Wired detailed how this AI shifts vulnerability hunting into overdrive, exposing flaws humans miss -- but demanding discipline to wrangle the flood. Anthropic moved fast. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said. No signs of core system compromise, they added. 
Yet whispers on X suggest the breach hit multiple unreleased models too. One post from @ns123abc laid it bare: hackers guessed URLs post-Mercor leak, slipped in via lingering contractor creds. The whole pipeline exposed. Posts from @coinbureau and @LarkDavis amplified the alarm, noting restrictions to 40 firms exactly to curb cyber risks. This isn't isolated sloppiness. The National Security Agency deploys Mythos despite Pentagon labels tagging Anthropic as a supply-chain risk -- a feud spilling into court. Axios reported wider NSA uptake, prioritizing cyber edge over bans. UK counterparts route through the AI Security Institute. Meanwhile, the breach spotlights vendor weak links in AI's high-stakes chain. Contractors like Mercor, hit earlier, leak naming conventions. Guesses turn into gateways. What if next time it's nation-states, not forum dwellers? Broader ripples hit fast. CNBC aired segments on the leak during 'Fast Money,' with Kate Rooney flagging Silicon Valley tremors. Financial Times confirmed Anthropic's probe into the 'powerful' model handed to trusted few. Reddit threads in r/ClaudeAI and r/ClaudeCode buzzed with leaked excerpts, underscoring containment struggles for potent tech. So where does this leave enterprise AI security? Tools like Mythos promise to outpace human hackers, spotting multi-step chains others ignore -- like a 27-year-old OpenBSD flaw or FreeBSD exploits. But day-one cracks erode trust. Partners demand ironclad isolation; regulators eye tighter controls. Anthropic's "safe AI" badge takes a hit, even as it sues DoD over blacklists. Vendors scramble to audit creds. And those forum users? Still inside, testing limits. One wrong prompt away from chaos.

Mercor · Discord · CHAOS · Anthropic
WebProNews · 18h ago

Anthropic investigates unauthorized access to restricted Claude Mythos AI model - SiliconANGLE

Anthropic investigates unauthorized access to restricted Claude Mythos AI model Anthropic PBC is investigating a report that unauthorized users accessed Claude Mythos, the next-level artificial intelligence model the company says is powerful enough to enable dangerous cyberattacks. A small group of users in a private online forum gained access to Mythos on the same day Anthropic announced a limited testing release of the model, Bloomberg first reported Tuesday, citing a person familiar with the matter and documentation it had viewed. The group has been using the model regularly since, though not for cybersecurity purposes, the person said. The account was corroborated with screenshots and a live demonstration. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. The company said there is no indication the activity extended beyond the vendor or that its own systems were affected. The users reportedly gained entry through the credentials of a member of the forum who works for a third-party contractor that evaluates Anthropic models. The group combined those credentials with details from a data breach at artificial intelligence recruiting and training startup Mercor Inc. to locate the model. Bloomberg's source also claimed that the group has access to other unreleased Anthropic models. Anthropic has previously described Mythos as having a level of coding ability that can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The company has restricted distribution to Project Glasswing, with a preview version that has been offered to Apple Inc., Amazon.com Inc., Cisco Systems Inc., CrowdStrike Holdings Inc., Google LLC, JPMorgan Chase & Co., Microsoft Corp. and Nvidia Corp., along with about 40 other organizations, so they can test and secure their own systems. 
Access to the model has also become a point of contention across the U.S. government. The National Security Agency and the Commerce Department's Center for AI Standards and Innovation already have access, according to reports, and the Treasury Department is seeking it. The group using Mythos has so far avoided offensive tasks, reportedly to evade detection. Discussing the reports, Ram Varadarajan, chief executive officer at cyber deception technology company Acalvio Technologies Inc., told SiliconANGLE via email that "the Mythos breach didn't require a sophisticated attack." "It just required a contractor, a URL pattern and a Day-One guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue," said Varadarajan. "This is the supply chain problem that perimeter-centric security has always underestimated: access controls are a policy, not an architecture, and policies fail." Tim Mackey, head of software supply chain risk strategy at application security firm Black Duck Software Inc., noted that "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture the flag exercise, where success includes claims of unauthorized access to Mythos." "The unfortunate reality is that while it's great to hear that novel cybersecurity models are being provided to select researchers to evaluate, if your team is on the outside looking in, waiting for the final report might not be top of mind," said Mackey. "For defenders, even the specter of unauthorized access to an adversarial model as powerful as Mythos is purported to be only increases anxiety levels." "What's clear is that security leaders in organizations of all sizes should take this claim as a call to action focused on the role AI-enabled cybersecurity plays in their operations and how best to scale those efforts to deal with AI-enabled adversaries," added Mackey.

Anthropic · Mercor
SiliconANGLE · 19h ago

Anthropic Mythos shaping up as nothingburger

And that unauthorized access? 'A nothing burger,' hacking startup CEO tells El Reg Anthropic's Mythos model is purportedly so good at finding vulnerabilities that the Claude-maker is afraid to make it available to the general public for fear that criminals will take advantage. But early analysis shows that Mythos may not be as dangerous as some would have you believe. Anthropic made Mythos available in preview to a select but ever-growing number of organizations under the title of Project Glasswing so they could find and fix vulnerabilities in their environment before criminals got hold of the purported zero-day machine and caused mayhem. That plan didn't quite work as intended. On Wednesday, an Anthropic spokesperson confirmed to The Register that some non-Glasswing partners may have accessed the model - but not through Anthropic's production API. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the spokesperson told us. The AI biz declined to name the third-party vendor, but said that it's a company Anthropic works with on model development. There's no evidence that unauthorized activity extended beyond the third-party vendor's environment or that Anthropic systems are affected, we're told. Bloomberg, which originally reported the unauthorized access, said that "a handful" of people gained access to Mythos by making "an educated guess about the model's online location" based on Anthropic's previous models, and that these details were revealed in the recent Mercor data breach. Mercor is an AI staffing startup that supplies specialized contractors to major AI labs, including Anthropic. Earlier this month, Mercor said that it was "one of thousands of companies" affected by the LiteLLM supply-chain attack. This group of unauthorized users reportedly belongs to a private Discord channel and gained access to Mythos on the same day that Anthropic announced Project Glasswing. 
Since then, it's been "playing around" with the bug-hunting machine, and doesn't have any interest in using the model for evil, according to Bloomberg. Regardless of what the group is doing with Mythos, their access illustrates a couple of key points. First: it's really hard to keep code under wraps (as also evidenced by Anthropic's earlier Claude Code source leak), especially when the folks who want to kick the tires on the new model are cybersecurity and engineering types - and they didn't even need to hack into any network or database to do it. Insider and supply-chain threats are the real deal. "The Mythos breach didn't require a sophisticated attack," Ram Varadarajan, CEO at Acalvio, a deception-tech firm, told The Register. "It just required a contractor, a URL pattern, and a day-one guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue." Additionally, considering all the hype Anthropic spun around its new model, we shouldn't be surprised the genie is out of the lamp. "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture-the-flag exercise, where success includes claims of unauthorized access to Mythos," Tim Mackey, head of risk strategy at supply chain security shop Black Duck, told The Register. That marketing may have outstripped reality. Early reports from Mythos preview users including AWS and Mozilla indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers - making it a welcome time-saver for the human teams - it has yet to eclipse human security researchers.
"So far we've found no category or complexity of vulnerability that humans can find that this model can't," Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: "We also haven't seen any bugs that couldn't have been found by an elite human researcher." In other words, it's like adding an automated security researcher to your team. Not a zero-day machine that's too dangerous for the world. It's a nothingburger. Anthropic, in announcing the new model, claimed Mythos identified "thousands of additional high- and critical-severity vulnerabilities." VulnCheck researcher Patrick Garrity, however, put the count as of last week at maybe 40. Or maybe none at all. Another engineer, Devansh, scoured the Mythos-related CVE advisories and Anthropic's exploit code, 44-prompt transcript, and 244-page system card, along with Glasswing partner agreements and red-team writeups. He also looked at Aisle's replication study, which tested Mythos' showcase vulnerabilities on small, cheap, open-weights models and found they produced much of the same analysis. Devansh ultimately concluded that while the bugs it found are real, the true Mythos story is "one of misinformation and hype." For example, the Anthropic-claimed 181 Firefox exploits ran with the browser sandbox turned off, and the FreeBSD exploit transcript "shows substantial human guidance, not autonomy." Additionally, the "'thousands of severe vulnerabilities' extrapolates from 198 manually reviewed reports. The Linux kernel bug was found by Opus 4.6, the public model, not Mythos," Devansh said. Another researcher, Davi Ottenheimer, pointed out that the security section (Section 3, pages 47-53) of Anthropic's 244-page documentation "contains no count of zero-days at all. With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate."
Ottenheimer likens it to "the ending of the Wizard of Oz, a sorry disappointment about a model weaponizing two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defense-in-depth mitigations stripped out." Snehal Antani, co-founder and CEO of offensive AI hacking company Horizon3.ai, told The Register, "attackers didn't need Mythos to accelerate vulnerability research, 4.6 and open source models have already been accelerating the vulnerability process." When asked if the security community should be concerned about unauthorized Mythos access, Antani said no. "In my honest opinion, it's a nothingburger," he told us. "The adversary doesn't need Mythos to hack you." ®

Discord · Mercor · Anthropic
TheRegister.com · 20h ago

Report: Discord Group Uses Claude's Supposedly Secret Mythos

An unauthorized group of users gained access to the Claude Mythos Preview artificial intelligence model and has regularly used it since the day that AI firm Anthropic revealed the model's existence while pronouncing it too dangerous to release to the public, reports Bloomberg. Anthropic made a splash earlier this month when it said it would reserve access to a select group of companies joined together under "Project Glasswing," with the understanding that they would use the model to find and fix security vulnerabilities before hackers get access to equally powerful tech. Members include Nvidia, Apple, Amazon and Cisco (see: Anthropic Calls Its New Model Too Dangerous to Release). A source told Bloomberg the unauthorized users belong to a private Discord channel dedicated to unreleased models. An apparent member of the group told the newswire that users have not used Mythos to hunt for new exploits. Anthropic has touted the vulnerability-finding properties of Mythos in a publicity campaign that has received some outside validation, such as from the AI Security Institute in Great Britain, which found the model to be "a step up over previous frontier models." The source told Bloomberg the Discord group deployed a mix of tactics to access Mythos, including using access the source has as a third-party contractor for Anthropic. The group "made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models." The person said such data was leaked in a recent breach at AI startup Mercor (see: Mercor Breach Linked to LiteLLM Supply-Chain Attack). An Anthropic spokesperson told the newswire that it is investigating the matter but that it has no evidence of unauthorized Mythos use beyond the third party's IT environment.
The source told Bloomberg the Discord group also has access to other unreleased Anthropic models. Anthropic's release of Mythos to select partners received a rejoinder from rival firm OpenAI, which days later released GPT‑5.4‑Cyber with the stated intention of making it "as widely available as possible." OpenAI said it will rely on user identity verification and "trust signals" to safeguard its vulnerability-seeking AI model from being put to bad uses (see: OpenAI Touts Wider Access to Its New Cyber Model).

Discord · Mercor · Anthropic
DataBreachToday · 22h ago

Anthropic's Locked-Down Mythos Model Hit by Access Claim | This Week in IT - Techopedia

Suswati Basu is a multilingual, award-winning editor.

Anthropic is investigating reports that Claude Mythos Preview, an unreleased version of its AI model, may have been accessed without authorization through a third-party vendor environment tied to development work. Speaking to Techopedia, an Anthropic spokesperson said: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." Only a week after releasing the Claude Mythos Preview to a select group of organizations, people familiar with the matter said the reported activity appears linked to an external development platform rather than Anthropic's production API systems. The sources added there is no evidence at this stage that the incident extended beyond that external environment or affected the company's internal infrastructure.

Anthropic's Mythos Model Is Already Being Put To The Test

Anthropic has not said whether any data was removed or when the alleged access took place; however, Bloomberg reported that it took place on a "private online forum." The users were reportedly part of a private Discord group focused on uncovering details about unreleased AI models, using bots to scan unsecured websites such as GitHub for stray references posted by major labs. The news outlet reported that the group gained access to Mythos after some members made an educated guess about the model's online location using naming patterns Anthropic had used for earlier releases. Some of those clues allegedly emerged from a recent data breach involving Mercor, a startup that works with several leading AI developers. The irony is hard to miss. Anthropic positioned Mythos as a model so powerful that it required an unusually cautious rollout.
It limited access to a small number of trusted partners because of fears it could be misused by hackers or destabilize cybersecurity defenses. Yet the first major controversy surrounding the system is not what Mythos itself might do. It's the possibility that third parties have already gained access through the carelessness of an external development partner. For a company that has built much of its identity around AI safety and controlled deployment, this risks reinforcing a familiar lesson in tech: A system is only as strong as the weakest link in its wider supply chain. Quite often, that weak link is basic human nature.

Also in Tech News

Tim Cook Steps Down As Apple Addresses Its AI Problem

After more than a decade at the helm, Apple's head honcho Tim Cook passed the baton to John Ternus, signaling a change in direction for the $4 trillion company. In a statement, the 65-year-old said Ternus would attempt to "make something better, bolder, more beautiful, and more meaningful." Ternus has been serving as the tech giant's senior vice president of Hardware Engineering. The changing of the guard comes at a time when Apple appears to have stalled in the AI race against the likes of OpenAI, Google, and Grok. Cook's tenure as CEO will end on September 1, bringing to an end an era defined by operational efficiency and financial growth. Although he ushered Apple into its trillion-dollar era, Cook has often lived in the shadow of his predecessor. Analysts have built a mythos around company cofounder Steve Jobs, next to whom Cook has seemed perhaps too straight-laced. Now, Ternus will be expected to step up as both a master of managing sprawling operations and an innovation wizard for this new tech era. Ming-Chi Kuo, a tech analyst at TF International, wrote on X that one of Ternus's major achievements was overseeing the transition from Intel processors to the firm's own proprietary silicon.
Kuo added: "Without this, there wouldn't be the success of today's MacBook Neo or the advantage Apple now holds as it gears up for AI devices."

Meta Plans to Track Employee Keystrokes for AI Training

Meta has found itself in hot water after reports emerged that it plans to track the computer activity of U.S. employees to help train its AI models. The software is expected to capture mouse movements, clicks, and keystrokes as the company looks to build AI agents capable of working more autonomously, Reuters first reported, citing an internal memo. According to the report, the company's Model Capability Initiative tool would run across work-related apps and websites, while also taking occasional snapshots of content displayed on employees' screens. Techopedia contacted Meta for comment, but an initial email bounced back. We will continue to seek a response. The move has already drawn criticism from privacy and ethics experts. Veith Weilnhammer, a Max Planck Fellow in Computational Psychiatry, wrote on LinkedIn: "Beyond questions about AI systems that emulate human behavior, such as their impact on the job market, privacy, and the growing commercial value of human behavioral knowledge, this raises an important societal issue: How should we govern access to human-computer interactions, and how can these data be used for public good?" For now, the data collection is reportedly limited to the U.S., with stricter privacy rules likely to make a similar rollout more difficult in Europe.

UK Cyber Chief Warns Frontier AI Is Accelerating Exploit Discovery

Britain's top cybersecurity official is expected to warn that frontier AI models are making it easier to discover and exploit software flaws at scale, as the UK confronts a rising mix of technological disruption and geopolitical threats.
In remarks due to be delivered at the CYBERUK conference in Glasgow on Wednesday (April 23), National Cyber Security Centre chief executive Richard Horne is set to say that while AI has the potential to strengthen cyber defense, adversaries will also move quickly to weaponize the technology. Politico reported Horne will caution that frontier AI is already "rapidly enabling discovery and exploitation of existing vulnerabilities at scale," increasing pressure on organizations to patch systems, replace legacy technology, and improve basic cyber hygiene. Researchers said Anthropic's Mythos, for example, was too dangerous for general release because of its alleged ability to help users identify and exploit sophisticated vulnerabilities. And just like that, we've gone full circle back to Anthropic.

Discord · Mercor · Anthropic
Techopedia.com · 23h ago

Discord group says it accessed Claude Mythos by guessing location

The Anthropic AI model deemed a danger to cybersecurity may need better security of its own. An anonymous group of Discord users says it gained access to Claude Mythos Preview, the new AI model Anthropic claims is too powerful for public release.

Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software."

But it might need to pay more attention to its own software security. As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the model's online location from past Anthropic naming conventions -- patterns exposed in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to reach Claude Mythos, the group still had to employ additional tactics: one member already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel focused on hunting information about unreleased models.

A member of the group told Bloomberg they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg it had indeed breached Anthropic's security.

Anthropic confirmed in a statement to Bloomberg that it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been accessed by other unauthorized parties. Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.
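The "guessing" described above amounts to candidate-URL enumeration: if endpoint names follow a predictable convention, anyone who has seen a few examples can construct the rest. A minimal sketch of why that search space is so small (the base URL, model families, and suffixes here are all hypothetical, not Anthropic's actual conventions):

```python
from itertools import product

# Hypothetical patterns inferred from a handful of previously seen
# endpoints, e.g. https://api.example.com/v1/models/claude-sonnet-preview
known_families = ["opus", "sonnet", "haiku", "mythos"]
known_suffixes = ["preview", "internal", "eval"]

def candidate_endpoints(base="https://api.example.com/v1/models"):
    """Build the full cross-product of plausible endpoint URLs."""
    return [f"{base}/claude-{fam}-{suf}"
            for fam, suf in product(known_families, known_suffixes)]

candidates = candidate_endpoints()
# 4 families x 3 suffixes = only 12 candidates to try.
print(len(candidates))  # 12
```

The point is not the specific names but the arithmetic: a convention-based namespace collapses to a cross-product a person can check by hand, which is why naming patterns leaked in an unrelated breach were enough to locate the model.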

Mashable SEA, 23h ago


Anthropic's 'Too Dangerous To Release' AI Model Was Accessed By Discord Group On Day One

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. Because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly and instead tightly restricted access through 'Project Glasswing' to more than 50 carefully vetted organizations: 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA).

Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in. The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be.

According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize that the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there is no evidence the access went beyond a third-party vendor's environment or that it is affecting any of Anthropic's systems.

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview. The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg.

Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

The unauthorized access itself was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.

So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic had offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group, but the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.
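The "endpoint randomization" mentioned above would replace convention-based names with slugs that cannot be derived from patterns. A minimal sketch using Python's standard `secrets` module (the URL scheme and slug length are assumptions for illustration, not a description of any vendor's actual fix):

```python
import secrets

def randomized_endpoint(base="https://api.example.com/v1/models"):
    """Append a high-entropy slug so the path cannot be inferred
    from naming conventions; 32 random bytes is ~256 bits of entropy."""
    slug = secrets.token_urlsafe(32)  # 43 URL-safe base64 characters
    return f"{base}/{slug}"

url = randomized_endpoint()
# Each call yields a different, unguessable path.
print(len(url.rsplit("/", 1)[1]))  # 43
```

Unlike the 12-candidate namespace a naming convention produces, a 256-bit slug makes enumeration computationally hopeless, though it does nothing against the other vector in this story: a contractor's legitimate credentials.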

Zero Hedge, 1d ago

Anthropic investigates unauthorized access to Mythos AI model after contractor credentials compromised

A small group exploited third-party vendor weaknesses to access an AI model capable of discovering thousands of zero-day vulnerabilities, forcing Anthropic to lock the system behind a $100M restricted access program.

An AI model that can autonomously find over 1,000 zero-day vulnerabilities across major operating systems just got accessed by people who were never supposed to touch it. That's roughly the cybersecurity equivalent of leaving the keys to every lock in the building taped to the front door.

Anthropic confirmed that its Claude Mythos Preview model, a system with genuinely alarming offensive cybersecurity capabilities, was breached by a small group of unauthorized users. The access was gained through compromised contractor credentials from a third-party vendor, combined with URL inferences gleaned from a separate data breach at Mercor, an AI training data provider. The incident occurred just two weeks after Anthropic publicly announced Mythos on April 7, 2026.

Here's the thing about Mythos that makes this breach particularly unsettling. This isn't a chatbot that writes poetry or summarizes PDFs. Mythos was designed to discover security vulnerabilities autonomously, and it turned out to be disturbingly good at the job. The model identified thousands of zero-day vulnerabilities -- security flaws unknown to the software vendor and therefore unpatched -- across major operating systems and web browsers. Among its discoveries was a 27-year-old flaw in OpenBSD, a system widely regarded as one of the most secure operating systems ever built. In English: Mythos found holes in software that the entire global security community missed for nearly three decades.

At the time the breach was discovered, over 99% of the vulnerabilities Mythos identified remained unpatched. That statistic alone explains why Anthropic wasn't exactly planning to hand out free trials. The model's capabilities represent a double-edged sword of historic proportions: in defensive hands, it's a revolutionary security tool; in the wrong hands, it's a skeleton key to the internet.

The unauthorized users gained access within roughly 24 hours of the model's public announcement. The speed of the intrusion suggests either sophisticated planning or opportunistic exploitation of already-compromised credentials. Either way, it exposed a fundamental weakness not in Anthropic's core infrastructure, but in the sprawling chain of third-party vendors that modern AI companies depend on.

Anthropic's response was swift and expensive. The company launched Project Glasswing, a restricted access program designed to let vetted organizations use Mythos for defensive cybersecurity purposes while keeping the model locked away from everyone else. The program comes with $100 million in usage credits for participating organizations -- a substantial investment signaling that Anthropic views this not as a PR crisis to manage but as an existential governance challenge to solve. The goal is straightforward: allow trusted entities like government agencies and financial institutions to leverage Mythos to identify and patch vulnerabilities in their own systems, without creating pathways for malicious exploitation.

Look, the concept sounds elegant on paper. In practice, restricting access to a model this powerful is like trying to put toothpaste back in the tube. Once the capabilities are known to exist, the incentive for bad actors to replicate or access them only intensifies.

The breach itself has been categorized as a vendor security failure, which is a polite way of saying the weakest link wasn't Anthropic's own security but the credential management practices of a contractor. This pattern is painfully familiar across the tech industry: some of the most consequential breaches in history, from Target to SolarWinds, exploited third-party access points rather than primary defenses.
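The restricted-access model described above boils down to gating an API behind per-organization credentials checked against a vetted allowlist. A minimal sketch of such a gate (the organization names, token scheme, and function names are all hypothetical illustrations, not Anthropic's actual implementation):

```python
import hashlib
import hmac
import secrets

# Hypothetical registry of vetted partner orgs. In practice these
# digests would live in a secrets manager, not in application code.
_VETTED_ORGS = {}

def enroll(org: str) -> str:
    """Issue a high-entropy token for a vetted organization."""
    token = secrets.token_urlsafe(32)
    # Store only a digest, so a leaked registry doesn't leak tokens.
    _VETTED_ORGS[org] = hashlib.sha256(token.encode()).hexdigest()
    return token

def authorize(org: str, token: str) -> bool:
    """Constant-time check that the caller presents the enrolled token."""
    expected = _VETTED_ORGS.get(org)
    if expected is None:
        return False  # org was never vetted
    digest = hashlib.sha256(token.encode()).hexdigest()
    return hmac.compare_digest(digest, expected)

token = enroll("example-partner")
print(authorize("example-partner", token))      # True
print(authorize("example-partner", "guessed"))  # False
print(authorize("unvetted-org", token))         # False
```

Even a gate like this illustrates the article's point: the check is only as strong as the handling of the tokens themselves, and in the reported breach the failure was exactly that -- a legitimate contractor credential used outside its intended scope.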
This incident arrives at a moment when AI safety discourse has shifted from theoretical hand-wringing to concrete urgency. Government officials and financial sector leaders have reportedly begun urgent discussions about how to govern AI systems with capabilities this significant.

For investors tracking the AI and cybersecurity sectors, the Mythos breach crystallizes several trends worth watching closely.

First, the cybersecurity market is almost certainly about to see accelerated capital flows. When an AI model can find thousands of zero-day vulnerabilities that human researchers missed for decades, every organization with a digital footprint suddenly needs to reassess its defense posture. Companies specializing in vulnerability management, endpoint detection, and AI-powered security tools stand to benefit as enterprises scramble to adapt.

Second, AI companies face a new category of reputational and regulatory risk. Anthropic built Mythos with defensive applications in mind, but the unauthorized access demonstrates that intent and outcome don't always align. Regulators will likely use this incident as evidence that voluntary safety commitments are insufficient, potentially accelerating mandatory compliance frameworks for AI developers. Any company building frontier AI models should be pricing in the cost of significantly more rigorous access controls and vendor audits.

Third, the third-party vendor ecosystem is becoming a critical vulnerability surface for AI companies specifically. Traditional software companies have dealt with supply chain security for years, but AI models represent a unique challenge: the value of unauthorized access to a model like Mythos is orders of magnitude higher than access to a conventional enterprise software tool. This asymmetry between the value of the asset and the security of the access chain creates an extremely attractive target profile for sophisticated threat actors.

The competitive landscape may also shift in interesting ways. Anthropic's willingness to invest $100 million in a controlled access program suggests that frontier AI companies will increasingly need to build security and governance infrastructure that rivals their research capabilities. That's expensive and complex, potentially favoring larger, better-capitalized players over smaller AI startups that lack the resources to manage models with dual-use potential.

There's also a less obvious dynamic at play. Mythos's ability to discover vulnerabilities at scale could eventually become a net positive for overall internet security, if its deployment remains restricted to defensive applications. The 99% unpatched rate means the model has essentially generated a roadmap for fixing critical flaws across the software ecosystem. Whether that roadmap gets used for patching or exploitation depends entirely on how well Anthropic and its partners can maintain control.

The Mercor data breach connection adds another layer of concern. It suggests that breaches at AI training data providers can have cascading effects, creating attack vectors that weren't previously considered. As the AI supply chain grows more interconnected, a security failure at one node can compromise systems several degrees removed.

For what it's worth, Anthropic appears to be taking this seriously rather than defaulting to the standard corporate playbook of minimizing and moving on. The scale of the Glasswing investment and the speed of the response suggest genuine alarm at the leadership level. But the fundamental tension remains unresolved: building AI systems powerful enough to autonomously discover zero-day vulnerabilities means building AI systems powerful enough to cause serious harm if control is lost. The Mythos breach didn't result in catastrophic exploitation, at least not that we know of yet. The next one might not be so uneventful.

Bottom line: the Mythos incident is a live demonstration that AI safety isn't an abstract philosophical debate. It's an operational security problem with real-world consequences. How Anthropic, regulators, and the broader industry respond will set precedents for governing the most capable AI systems ever built. The $100 million question, literally, is whether restricted access programs can actually work when the incentives to break them are this high.

Crypto Briefing, 1d ago