News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic Mythos Develops Into Insignificant Outcome • The Register

Anthropic's Mythos model is designed to discover software vulnerabilities, yet its release has stirred concern. Initially introduced under the Project Glasswing initiative, the model was restricted to select organizations for vulnerability assessment. Recent developments, however, reveal that unauthorized access to Mythos occurred, heightening cybersecurity concerns.

Unauthorized Access Incident

On Wednesday, an Anthropic representative confirmed that individuals outside the Glasswing partners might have accessed the Mythos model. This access was not through Anthropic's authorized production API. The spokesperson stated, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The third-party vendor, linked to Anthropic's model development, has not been publicly identified. According to Bloomberg, a small group exploited their knowledge of the model's online location, derived from prior leaks, to gain access.

Mercor Data Breach

This unauthorized access coincided with a data breach at Mercor, an AI staffing firm that supplies contractors to major AI labs. Earlier in the month, Mercor acknowledged being affected by the LiteLLM supply-chain attack. Reports suggested that the intruders, identified as members of a private Discord channel, began accessing Mythos the same day Anthropic announced Project Glasswing.

Mythos' Capabilities and Limitations

Despite its marketing hype, early user feedback about Mythos indicates limitations. While organizations like AWS and Mozilla have praised its speed in identifying vulnerabilities, it has not outperformed elite human cybersecurity researchers. Mozilla's CTO, Bobby Holley, disclosed that Mythos found 271 vulnerabilities in Firefox but acknowledged that any vulnerabilities it discovered could also have been identified by skilled human researchers.

Claims of Overhype

Researchers have raised concerns about the veracity of the claims surrounding Mythos. While Anthropic touted its ability to discover "thousands of high- and critical-severity vulnerabilities," critics argue these numbers are exaggerated. For instance, VulnCheck researcher Patrick Garrity estimated the actual count at around 40, and no confirmed zero-day exploits were documented. Claims regarding 181 Firefox vulnerabilities were also scrutinized, revealing that most findings stemmed from environments without standard security measures.

Concerns in the Cybersecurity Community

Experts have mixed reactions to the unauthorized access to Mythos. Snehal Antani, CEO of Horizon3.ai, stated the security community should not overreact. He emphasized that adversaries do not require Mythos for vulnerability research; existing open-source models already facilitate this process.

* Unauthorized Access: Occurred via a third-party vendor.
* Vulnerability Discovery: Mythos' findings are comparable to those of skilled human researchers.
* Hype vs. Reality: Reports indicate exaggerated claims of Mythos' capabilities.

The incident surrounding Anthropic's Mythos model illustrates the challenges of maintaining security and managing expectations in the rapidly evolving AI landscape. As the investigation continues, the cybersecurity community watches closely, evaluating the model's true potential and implications.

Mercor · Discord · Anthropic
El-Balad.com · 2h ago

A group of users leaked Anthropic's AI model Mythos by reportedly guessing where it was located | Fortune

The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. The group was able to access the program earlier this month in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located, drawing on knowledge of Anthropic's past practices that hackers had obtained from AI training startup Mercor in a previous leak. Although the group that accessed it has not been using the model for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported.

Anthropic did not immediately respond to Fortune's request for comment. A spokesperson from Anthropic told Bloomberg the company was "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."

The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies -- including Microsoft, Apple, and Google -- to beef up their security ahead of a wider release, thousands of people likely had access to the program across these companies, which makes a leak nearly inevitable, he said. "It was bound to happen," Lindner said. "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it."

Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions. The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser.

And yet, Mythos' release has been plagued by security breaches from the start. Fortune was the first to report on the model's existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database. For Lindner, this most recent unauthorized access shows it's likely U.S. adversaries already have access to this tech, which could put U.S. companies and other systems at risk of attacks. "If some group -- some random Discord online forum -- got access to it, it's already been breached by China," Lindner told Fortune.

Although Lindner is still unsure how much of Mythos' supposed danger is real or just marketing hype -- OpenAI's Sam Altman this week called Anthropic's promotion of Mythos "fear-based marketing" -- it's clear cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks. "The real thing is there's a real compression of timelines here for defenders," he said. AI is unique in its abilities to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly tackle a weak spot in a company's security system, whereas a human may eventually give up.
It also empowers less experienced developers to carry out cyberattacks, partly by drawing on the myriad documentation available on the web about previous exploits, feeding it to an AI model, and adjusting its attacks for specific situations. "It's the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now," he said.

Lindner said the fact that the program was reportedly accessed by third-party contractors means that, even more than before, companies need to limit who has access to their most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, which may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and continue on as before are even more at risk, he said. "AI is not a golden ticket, but if you're not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side," he said.
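
Lindner's prescription here, limiting which third parties can touch vital systems, is concrete enough to sketch. Below is a minimal illustration of scope- and expiry-bound credentials in Python; the function names, scope strings, and token format are hypothetical and are not anything Anthropic or Contrast Security actually uses:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # server-side secret, never shared

def issue_credential(subject: str, scopes: list[str], ttl_seconds: int) -> str:
    """Mint a signed credential limited to explicit scopes and a short lifetime."""
    claims = {"sub": subject, "scopes": scopes, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def authorize(token: str, required_scope: str) -> bool:
    """Reject tampered tokens, expired tokens, and out-of-scope requests."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

# A contractor evaluating one model gets hours of access, not standing access,
# and the credential is useless against any endpoint outside its scope.
token = issue_credential("vendor-worker-17", ["eval:model-preview"], ttl_seconds=4 * 3600)
assert authorize(token, "eval:model-preview")
assert not authorize(token, "prod:model-api")
```

The point of the sketch is that a leaked contractor token expires in hours and never unlocks endpoints outside its declared scope, shrinking the blast radius of exactly the kind of credential misuse reported here.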

Anthropic · Mercor · Discord
Fortune · 3h ago

AI Startup Mercor Faces Lawsuit Over Data Breach | PYMNTS.com

Mercor, the $10 billion AI staffing company that has worked with the likes of Meta, has been served with at least seven class-action lawsuits in the wake of its recent data breach, The Wall Street Journal (WSJ) reported Thursday (April 23). The suits allege the breach exposed Mercor contractor information that included job interview recordings, facial biometric data and screenshots of employees' computers. One suit, the report added, claims Mercor collected applicant-vetting data, such as background checks, which it shared with partners in violation of federal regulations. According to plaintiffs, the company's practices include monitoring its contractors' computers and sharing that data with clients, using recorded candidate interviews to train AI models, and training client models on materials potentially owned by other companies.

"We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," Mercor said in a statement to the WSJ. "We take the privacy of our customers, contractors, employees and those we interview very seriously, and we comply with all relevant laws and regulations," the statement added, noting that the startup acted quickly to remedy the breach, which affected several other companies. "We are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings," it said. The WSJ report also cited a Meta spokesperson, who said the company has paused its work with Mercor and is investigating the breach.

PYMNTS wrote earlier this week about the "new consensus" being formed around the "data problem" beneath the race to deploy agentic AI. "More autonomous AI systems will raise the stakes for how data is created, governed, accessed and protected," that report said. "Synthetic data needs clearer standards. Real-world data needs tighter minimization. And the systems tying it all together need a stronger foundation of trust, security and control." Also this week, PYMNTS examined the changing cybersecurity landscape, arguing that while few of this year's high-profile incidents can be called "AI attacks," it is still hard to ignore the corresponding uptick in AI-powered offensive capability. "Anthropic's Claude Mythos Preview, for example, has reportedly demonstrated the ability to autonomously discover and exploit vulnerabilities across major operating systems and web browsers, including decades-old bugs in widely trusted systems," PYMNTS wrote.

Mercor · Anthropic
PYMNTS.com · 3h ago

Anthropic's 'Too Dangerous To Release' AI Model Was Accessed By Discord Group On Day One

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. And because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations - 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA). Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in.

The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be. According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there's no evidence that the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems.

Project Glasswing

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview. The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg.

Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

A Low-Tech Breach

The unauthorized access was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.

So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.
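
One of the fixes the piece notes Anthropic has not announced, endpoint randomization, is simple to illustrate. A minimal sketch, with invented paths; it assumes nothing about Anthropic's real URL scheme:

```python
import secrets

# Guessable: anyone who learns the vendor's naming convention can derive this.
PREDICTABLE = "/v1/models/claude-mythos-preview"

def provision_endpoint(tenant_id: str) -> str:
    """Mint a per-tenant endpoint whose path cannot be inferred from past names."""
    slug = secrets.token_urlsafe(16)  # ~128 bits of entropy
    return f"/v1/tenants/{tenant_id}/models/{slug}"

print(provision_endpoint("glasswing-partner-03"))
# e.g. /v1/tenants/glasswing-partner-03/models/w8Qf3kZr1VbN0yLxTq5mHg
```

A 128-bit random slug cannot be derived from leaked naming conventions, though it complements rather than replaces authentication: an endpoint that is merely hard to find is still not access control.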

Discord · Anthropic · Mercor
Signs Of The Times · 3h ago

AI recruiting startup Mercor hit with at least seven class-action lawsuits after hacking: What the company has to say

Mercor, a Silicon Valley startup valued at $10 billion, is facing a wave of legal trouble after a massive data breach exposed the private information of thousands of its contractors, a report has said. According to The Wall Street Journal, at least seven class-action lawsuits have been filed against the company in recent weeks after it confirmed a third-party data breach.

Mercor hires contractors to provide feedback that helps train artificial intelligence (AI) models for tech giants like OpenAI, Anthropic and Meta. However, a breach involving a third-party partner has reportedly leaked everything from recorded job interviews to facial scans and even screenshots of workers' private computer screens.

The lawsuits, including one filed on Tuesday (April 22) in Northern California, do more than just complain about the hack; they offer a rare look at the aggressive tactics the company allegedly uses to gather data. According to the legal filings, plaintiffs claim Mercor engaged in several controversial practices, including tracking contractors' screens and sharing that private activity with clients; sharing background checks and applicant data with partners in ways that may violate federal regulations; using recorded video interviews of job candidates to train AI models without proper disclosure; and training models on materials that might actually belong to other companies.

In a public statement, the startup stood its ground, calling the lawsuits "speculative" and inaccurate. Regarding the data breach itself, the company noted that it was not the only victim. "We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," Mercor said in a statement, according to the publication. "We take the privacy of our customers, contractors, employees and those we interview very seriously, and we comply with all relevant laws and regulations," the company said, adding that it acted quickly to remediate the data breach. "We are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings," it said.

Meta, meanwhile, has already indefinitely suspended all work with the startup.

Mercor · Anthropic
The Times of India · 6h ago

Workers Sue $10B AI Startup Mercor Over Alleged Data Collection and Exposure

A $10 billion AI startup that supplies training data to companies like OpenAI, Anthropic, and Meta is facing a wave of lawsuits over how it collects and handles sensitive worker data. According to a report by The Wall Street Journal, at least seven class-action lawsuits have been filed against Mercor in recent weeks following a third-party data breach that allegedly exposed contractor information. The lawsuits claim the breach included highly sensitive material, ranging from recorded job interviews to facial biometric data and screenshots of workers' computers. One class-action suit filed in Northern California claims Mercor collected extensive applicant data -- including background checks -- and shared it with partners in violation of federal regulations. Plaintiffs allege that Mercor's practices go beyond standard hiring processes. According to the suit, the company monitored contractors' computers, used recorded candidate interviews to train AI models, and may have trained client systems on materials owned by other companies. In one account cited in the lawsuit, a plaintiff alleged that workers were encouraged to use real company data in tasks, provided it was slightly altered or anonymised. When the plaintiff tried to avoid including sensitive details, reviewers reportedly pushed back, criticising the work as too vague. Another contractor alleged he encountered financial models and prompts that appeared to contain proprietary information, including what the lawsuit describes as "pre-project metadata, hidden defined names, institutional data-terminal markers, real lender or counterparty names, irregular numeric precision, and other features that raised serious provenance questions." Mercor has denied the allegations. "We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," the company said in a statement. It added that it "take[s] the privacy of our customers, contractors, employees and those we interview very seriously" and that it complies with relevant laws and regulations. The company also said it acted quickly to address the breach, noting that "we are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings." The case is drawing attention to how AI companies source the data used to train models. The Journal reports that Mercor previously attempted to buy work materials from individuals on LinkedIn, including documents those individuals did not necessarily own the rights to. Online postings also suggested the company offered payments for personal-finance files and even Google Maps histories. Workers also described a system of continuous monitoring. Contractors are required to install tracking software called Insightful, which captures screenshots of their computers during work sessions. One lawsuit alleges that this software recorded activity across hundreds of applications, including personal accounts, and that workers were not "clearly informed" of the extent of the monitoring. Mercor said it informs workers that screenshots may be taken during billing hours and instructs them to use only work-related applications while the software is active. The fallout is already affecting Mercor's relationships. Meta has paused its work with the company and is investigating the incident, according to a spokesperson cited by the Journal. Anthropic declined to comment, while OpenAI did not respond to requests. 
The situation highlights growing tension in the AI industry, where companies are under pressure to secure large volumes of high-quality data to train increasingly advanced models.

Mercor · Anthropic
Techloy · 7h ago

Outsiders are already accessing Anthropic's new AI model, but is Claude Mythos really that powerful?

According to reporting by Bloomberg, a small number of people who are members of a private Discord channel dedicated to researching unreleased AI models have had unofficial access to Mythos since it was first announced. Getting in was apparently simple, too. "To access Mythos, the group of users made an educated guess about the model's online location," Bloomberg said in an article published on April 21. "They based this on knowledge about the format Anthropic has used for other models, the person said, adding that such formatting details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers." Anthropic said it was aware of the access and was investigating the report.

Shane Fry, Chief Technology Officer at RunSafe Security, said it was an example of how easily AI models can be exposed. "Unauthorised users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed," Fry said. "The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place."

Germaine Tan Shu Ting, VP Security & AI Strategy and Field CISO at Darktrace, expressed similar concerns. "It shows that the frontline remains identity," Tan Shu Ting said. "If Anthropic itself can be accessed using traditional hacking methods (reportedly co-opting existing third-party access and 'internet sleuthing'), then it highlights how critical it is to assume the threat is already inside the walls."

However, while analysts and industry insiders have reacted to Mythos with something like awe, the actual capabilities of the model may, in reality, fall far short of Anthropic's claims.

Don't believe the hype?

Doug Britton, EVP and chief strategy officer of RunSafe Security, referred to Mythos and Project Glasswing earlier in April as a "watershed moment for AI's runaway zero-day discovery and exploitation". "AI is now uncovering memory safety bugs at massive scale, including vulnerabilities that have been hiding in production code for over 25 years - the problem isn't just that these bugs exist, it's that they're being found faster than organisations can fix them," Britton said.

But the question is - are they being found that fast? Davi Ottenheimer, security engineer and president of security consultancy flyingpenguin, has some serious doubts. "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results," Ottenheimer said in a blog post around the time Mythos and Glasswing were announced. "The Glasswing consortium is regulatory capture dressed up poorly as restraint." Ottenheimer based his observations - rather caustic ones, it must be said - on Anthropic's own Claude Mythos Preview System Card, a "whoppingly inefficient 244-page document that devotes just seven pages to the claim that the model is too dangerous to release". According to Ottenheimer, those seven pages never mention the acronyms one might expect: CVSS, CWE or CVE.
"The flagship demonstration document turns out to be like the ending of the Wizard of Oz, a sorry disappointment about a model weaponising two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defence-in-depth mitigations stripped out. Anthropic failed, and somehow the story was flipped into a warning about its success." Ottenheimer has many issues with Anthropic's - and, it must be said, the wider media's - claims that Mythos found "Thousands of zero-day vulnerabilities in every major operating system and every major web browser", and he pulls no punches. Referencing that claim, Ottenheimer points out that the word 'thousands' is "used once, in reference to transcripts reviewed during the alignment evaluation". "It is never used to describe vulnerabilities. The cyber security section (Section 3, pages 47-53) contains no count of zero-days at all," Ottenheimer said. "With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate, why are you teasing us with the claims about vulnerabilities at all?" Cyber Daily has reached out to Anthropic for comment.

Discord · Mercor · Anthropic
cyberdaily.au · 12h ago

Discord group says it accessed Claude Mythos by guessing location

The Anthropic AI model deemed a danger to cybersecurity may need to be more secure itself. An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release. Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security. As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as found in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics. One member of the group already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.

Discord · Mercor · Anthropic
Mashable ME · 13h ago

Anthropic's exclusive cybersecurity tool Mythos has reportedly fallen into the hands of an unauthorized group, and the consequences could be massive | Attack of the Fanboy

A group of unauthorized users has reportedly gained access to Mythos, the powerful cybersecurity tool recently unveiled by Anthropic, TechCrunch reported. This development is significant because Anthropic has explicitly warned that Mythos is capable of identifying and exploiting vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The company has framed this technology as a double-edged sword. They previously noted that in the wrong hands, it could become a potent hacking tool rather than the defensive asset it was designed to be for enterprise security.

The unauthorized access was reportedly achieved by a small group of users operating within a private online forum. According to reports, these individuals managed to secure access to the tool on the same day it was publicly announced by Anthropic. The group, which is part of a Discord channel dedicated to hunting for information about unreleased AI models, used a mix of strategies to bypass restrictions. Perhaps most concerning is how the group managed to pinpoint the location of the model. They made an educated guess about the model's online location, relying on their existing knowledge of the naming conventions and formats Anthropic has used for previous models. This effort was reportedly aided by information revealed in a recent data breach from Mercor, an AI training startup that works with top developers. Furthermore, the group leveraged access provided by a person who is currently employed at a third-party contractor that works for Anthropic. This individual, who was interviewed about the breach, had legitimate permission to access Anthropic models and software related to evaluating the technology for the startup, which they gained through their contract work.

Anthropic has been very cautious with the distribution of Mythos. The model was released only to a select number of vendors and organizations as part of an initiative called Project Glasswing. This limited release was specifically designed to prevent the tool from falling into the hands of bad actors who might weaponize it against corporate security. Big names like Apple, Amazon, and Cisco Systems are among the organizations that have been granted access to test the model. Amazon, which is a key partner and backer of Anthropic, also offers Mythos through its Bedrock platform to a very specific, approved list of organizations. As the utility of the tool has become known, a growing number of financial institutions and government agencies on both sides of the Atlantic have been clamoring to get on that list to better safeguard their own systems.

In response to the reports, an Anthropic spokesperson provided a statement, saying, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The company has been quick to clarify that, so far, it has found no evidence that this unauthorized activity has impacted Anthropic's internal systems in any way. They maintain that the access appears to be contained within a third-party vendor's environment.

While the situation sounds alarming, the source who spoke about the breach offered some perspective on the intentions of the group. The individual claimed that the users involved are primarily interested in playing around with new models rather than wreaking havoc.
They have reportedly avoided running cybersecurity-related prompts on the Mythos model, choosing instead to experiment with tasks like building simple websites to avoid detection. The person also noted that this group has access to a variety of other unreleased Anthropic AI models, suggesting a broader scope of interest in the company's pipeline. This incident highlights the massive challenge Anthropic faces in keeping its most powerful and potentially dangerous technology from spreading beyond its approved partners. If these reports are accurate, it raises serious questions about how many other people might be using Mythos without permission and what their true objectives might be. For now, Anthropic is left to manage the fallout of this unauthorized access, which could potentially threaten the reputation of an exclusive release intended to bolster enterprise security. It is a stark reminder that even with strict initiatives like Project Glasswing, the digital perimeter is only as strong as its weakest link, especially when third-party vendors are involved in the deployment of such high-stakes software.
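
The "educated guess" described here is, mechanically, just enumeration over a small naming grammar. The sketch below builds such a candidate list offline from invented pattern components (no live probing, and nothing drawn from Anthropic's real conventions), which is enough to show why predictable naming is a liability:

```python
from itertools import product

# Invented components of a naming grammar, of the kind an observer might
# infer from previously published model identifiers.
FAMILIES = ["claude-mythos", "claude-opus", "claude-sonnet"]
VERSIONS = ["", "-4-6", "-5-0"]
STAGES = ["preview", "internal", "rc1"]

candidates = [
    f"/v1/models/{family}{version}-{stage}"
    for family, version, stage in product(FAMILIES, VERSIONS, STAGES)
]
print(len(candidates), "candidates, e.g.", candidates[0])
# 27 candidates, e.g. /v1/models/claude-mythos-preview
```

Twenty-seven candidates is a keyspace a person can search by hand, which is why unguessable, randomized endpoint paths matter.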

Discord · Mercor · Anthropic
Attack of the Fanboy · 20h ago

Anthropic investigates unauthorized access to restricted Claude Mythos AI model - SiliconANGLE

Anthropic PBC is investigating a report that unauthorized users accessed Claude Mythos, the next-level artificial intelligence model the company says is powerful enough to enable dangerous cyberattacks. A small group of users in a private online forum gained access to Mythos on the same day Anthropic announced a limited testing release of the model, Bloomberg first reported Tuesday, citing a person familiar with the matter and documentation it had viewed. The group has been using the model regularly since, though not for cybersecurity purposes, the person said. The account was corroborated with screenshots and a live demonstration.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. The company said there is no indication the activity extended beyond the vendor or that its own systems were affected. The users reportedly gained entry through the credentials of a member of the forum who works for a third-party contractor that evaluates Anthropic models. The group combined those credentials with details from a data breach at artificial intelligence recruiting and training startup Mercor Inc. to locate the model. Bloomberg's source also claimed that the group has access to other unreleased Anthropic models.

Anthropic has previously described Mythos as having a level of coding ability that can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The company has restricted distribution to Project Glasswing, with a preview version that has been offered to Apple Inc., Amazon.com Inc., Cisco Systems Inc., CrowdStrike Holdings Inc., Google LLC, JPMorgan Chase & Co., Microsoft Corp. and Nvidia Corp., along with about 40 other organizations, so they can test and secure their own systems. Access to the model has also become a point of contention across the U.S. government. The National Security Agency and the Commerce Department's Center for AI Standards and Innovation already have access, according to reports, and the Treasury Department is seeking it. The group using Mythos has so far avoided offensive tasks, reportedly to evade detection.

Discussing the reports, Ram Varadarajan, chief executive officer at cyber deception technology company Acalvio Technologies Inc., told SiliconANGLE via email that "the Mythos breach didn't require a sophisticated attack." "It just required a contractor, a URL pattern and a Day-One guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue," Varadarajan explained. "This is the supply chain problem that perimeter-centric security has always underestimated: access controls are a policy, not an architecture, and policies fail."

Tim Mackey, head of software supply chain risk strategy at application security firm Black Duck Software Inc., noted that "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture the flag exercise, where success includes claims of unauthorized access to Mythos." "The unfortunate reality is that while it's great to hear that novel cybersecurity models are being provided to select researchers to evaluate, if your team is on the outside looking in, waiting for the final report might not be top of mind," said Mackey.
"For defenders, even the specter of unauthorized access to an adversarial model as powerful as Mythos is purported to be only increases anxiety levels." "What's clear is that security leaders in organizations of all sizes should take this claim as a call to action focused on the role AI-enabled cybersecurity plays in their operations and how best to scale those efforts to deal with AI-enabled adversaries," added Mackey.

Anthropic · Mercor
SiliconANGLE · 21h ago

Anthropic Mythos shaping up as nothingburger

And that unauthorized access? 'A nothing burger,' hacking startup CEO tells El Reg

Anthropic's Mythos model is purportedly so good at finding vulnerabilities that the Claude-maker is afraid to make it available to the general public for fear that criminals will take advantage. But early analysis shows that Mythos may not be as dangerous as some would have you believe.

Anthropic made Mythos available in preview to a select but ever-growing number of organizations under the title of Project Glasswing so they could find and fix vulnerabilities in their environment before criminals got hold of the purported zero-day machine and caused mayhem. That plan didn't quite work as intended. On Wednesday, an Anthropic spokesperson confirmed to The Register that some non-Glasswing partners may have accessed the model - but not through Anthropic's production API. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the spokesperson told us. The AI biz declined to name the third-party vendor, but said that it's a company Anthropic works with on model development. There's no evidence that unauthorized activity extended beyond the third-party vendor's environment or that Anthropic systems are affected, we're told.

Bloomberg, which originally reported the unauthorized access, said that "a handful" of people gained access to Mythos by making "an educated guess about the model's online location" based on Anthropic's previous models, and that these details were revealed in the recent Mercor data breach. Mercor is an AI staffing startup that supplies specialized contractors to major AI labs, including Anthropic. Earlier this month, Mercor said that it was "one of thousands of companies" affected by the LiteLLM supply-chain attack. This group of unauthorized users reportedly belongs to a private Discord channel and gained access to Mythos on the same day that Anthropic announced Project Glasswing. Since then, it's been "playing around" with the bug-hunting machine, and doesn't have any interest in using the model for evil, according to Bloomberg.

Regardless of what the group is doing with Mythos, their access illustrates a couple of key points. First: it's really hard to keep code under wraps (as also evidenced by Anthropic's earlier Claude Code source leak), especially when the folks who want to kick the tires on the new model are cybersecurity and engineering types - and they didn't even need to hack into any network or database to do it. Insider and supply-chain threats are the real deal. "The Mythos breach didn't require a sophisticated attack," Ram Varadarajan, CEO at Acalvio, a deception-tech firm, told The Register. "It just required a contractor, a URL pattern, and a day-one guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue."

Additionally, considering all the hype Anthropic spun around its new model, we shouldn't be surprised the genie is out of the lamp. "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture-the-flag exercise, where success includes claims of unauthorized access to Mythos," Tim Mackey, head of risk strategy at supply chain security shop Black Duck, told The Register. That marketing may have outstripped reality.
Early reports from Mythos preview users including AWS and Mozilla indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers - making it a welcome time-saver for the human teams - it has yet to eclipse human security researchers. "So far we've found no category or complexity of vulnerability that humans can find that this model can't," Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: "We also haven't seen any bugs that couldn't have been found by an elite human researcher." In other words, it's like adding an automated security researcher to your team. Not a zero-day machine that's too dangerous for the world. It's a nothingburger.

The adversary doesn't need Mythos to hack you

Anthropic, in announcing the new model, claimed Mythos identified "thousands of additional high- and critical-severity vulnerabilities." VulnCheck researcher Patrick Garrity, however, put the count as of last week at maybe 40. Or maybe none at all.

Another engineer, Devansh, scoured the Mythos-related CVE advisories and Anthropic's exploit code, 44-prompt transcript, and 244-page system card, along with Glasswing partner agreements and red-team writeups. He also looked at Aisle's replication study, which tested Mythos' showcase vulnerabilities on small, cheap, open-weights models and found they produced much of the same analysis. Devansh ultimately concluded that while the bugs it found are real, the true Mythos story is "one of misinformation and hype." For example, the Anthropic-claimed 181 Firefox exploits ran with the browser sandbox turned off, and the FreeBSD exploit transcript "shows substantial human guidance, not autonomy." Additionally, the "'thousands of severe vulnerabilities' extrapolates from 198 manually reviewed reports. The Linux kernel bug was found by Opus 4.6, the public model, not Mythos," Devansh said.

Another researcher, Davi Ottenheimer, pointed out that the security section (Section 3, pages 47-53) of Anthropic's 244-page documentation "contains no count of zero-days at all. With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate." Ottenheimer likens it to "the ending of the Wizard of Oz, a sorry disappointment about a model weaponizing two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defense-in-depth mitigations stripped out."

Snehal Antani, co-founder and CEO of offensive AI hacking company Horizon3.ai, told The Register, "attackers didn't need Mythos to accelerate vulnerability research, 4.6 and open source models have already been accelerating the vulnerability process." When asked if the security community should be concerned about unauthorized Mythos access, Antani said no. "In my honest opinion, it's a nothingburger," he told us. "The adversary doesn't need Mythos to hack you." ®

Mercor · Discord · Anthropic
TheRegister.com · 21h ago

Anthropic's Locked-Down Mythos Model Hit by Access Claim | This Week in IT - Techopedia

Anthropic is investigating reports that Claude Mythos Preview, an unreleased version of its AI model, may have been accessed without authorization through a third-party vendor environment tied to development work. Speaking to Techopedia, an Anthropic spokesperson said: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." Only a week after Anthropic released the Claude Mythos Preview to a select group of organizations, people familiar with the matter said the reported activity appears linked to an external development platform rather than Anthropic's production API systems. The sources added there is no evidence at this stage that the incident extended beyond that external environment or affected the company's internal infrastructure.

Anthropic's Mythos Model Is Already Being Put To The Test

Anthropic has not said whether any data was removed or when the alleged access took place; Bloomberg, however, reported that it originated on a "private online forum." The users were reportedly part of a private Discord group focused on uncovering details about unreleased AI models, using bots to scan unsecured websites such as GitHub for stray references posted by major labs. The news outlet reported that the group gained access to Mythos after some members made an educated guess about the model's online location using naming patterns Anthropic had used for earlier releases. Some of those clues allegedly emerged from a recent data breach involving Mercor, a startup that works with several leading AI developers.

The irony is hard to miss. Anthropic positioned Mythos as a model so powerful that it required an unusually cautious rollout. It limited access to a small number of trusted partners because of fears it could be misused by hackers or destabilize cybersecurity defenses. Yet the first major controversy surrounding the system is not about what Mythos itself might do. It's the possibility that third parties have already gained access through the carelessness of an external development partner. For a company that has built much of its identity around AI safety and controlled deployment, this risks reinforcing a familiar lesson in tech: a system is only as strong as the weakest link in its wider supply chain. Quite often, that weak link is basic human nature.

Also in Tech News

Tim Cook Steps Down As Apple Addresses Its AI Problem

After more than a decade at the helm, Apple's head honcho Tim Cook passed the baton to John Ternus, signaling a change in direction for the $4 trillion company. In a statement, the 65-year-old said Ternus would attempt to "make something better, bolder, more beautiful, and more meaningful." Ternus has been serving as the tech giant's senior vice president of Hardware Engineering. The changing of the guard comes at a time when Apple appears to have stalled in the AI race against the likes of OpenAI, Google, and Grok. Cook's tenure as CEO will end on September 1, bringing to a close an era defined by operational efficiency and financial growth. Although he ushered Apple into its trillion-dollar era, Cook has often lived in the shadow of his predecessor. Analysts have built a mythos around company cofounder Steve Jobs, next to whom Cook has seemed perhaps too straight-laced.
Now, Ternus will be expected to step up as both a master of managing sprawling operations and an innovation wizard for this new tech era. Ming-Chi Kuo, a tech analyst at TF International, wrote on X that one of Ternus's major achievements was overseeing the transition from Intel processors to the firm's own proprietary silicon. Kuo added: "Without this, there wouldn't be the success of today's MacBook Neo or the advantage Apple now holds as it gears up for AI devices."

Meta Plans to Track Employee Keystrokes for AI Training

Meta has found itself in hot water after reports emerged that it plans to track the computer activity of U.S. employees to help train its AI models. The software is expected to capture mouse movements, clicks, and keystrokes as the company looks to build AI agents capable of working more autonomously, Reuters first reported, citing an internal memo. According to the report, the company's Model Capability Initiative tool would run across work-related apps and websites, while also taking occasional snapshots of content displayed on employees' screens. Techopedia contacted Meta for comment, but an initial email bounced back. We will continue to seek a response. The move has already drawn criticism from privacy and ethics experts. Veith Weilnhammer, a Max Planck Fellow in Computational Psychiatry, wrote on LinkedIn: "Beyond questions about AI systems that emulate human behavior, such as their impact on the job market, privacy, and the growing commercial value of human behavioral knowledge, this raises an important societal issue: How should we govern access to human-computer interactions, and how can these data be used for public good?" For now, the data collection is reportedly limited to the U.S., with stricter privacy rules likely to make a similar rollout more difficult in Europe.

UK Cyber Chief Warns Frontier AI Is Accelerating Exploit Discovery

Britain's top cybersecurity official is expected to warn that frontier AI models are making it easier to discover and exploit software flaws at scale, as the UK confronts a rising mix of technological disruption and geopolitical threats. In remarks due to be delivered at the CYBERUK conference in Glasgow on Wednesday (April 23), National Cyber Security Centre chief executive Richard Horne is set to say that while AI has the potential to strengthen cyber defense, adversaries will also move quickly to weaponize the technology. Politico reported Horne will caution that frontier AI is already "rapidly enabling discovery and exploitation of existing vulnerabilities at scale," increasing pressure on organizations to patch systems, replace legacy technology, and improve basic cyber hygiene. Researchers said Anthropic's Mythos, for example, was too dangerous for general release because of its alleged ability to help users identify and exploit sophisticated vulnerabilities. And just like that, we've gone full circle back to Anthropic.

Anthropic · Mercor · Discord
Techopedia.com · 1d ago

Discord group says it accessed Claude Mythos by guessing location

The Anthropic AI model deemed a danger to cybersecurity may need to be more secure itself. An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release. Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security. As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as found in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics. One member of the group already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning. Want to learn more about getting the best out of your tech? Sign up for Mashable's Top Stories and Deals newsletters today.

DiscordMercorAnthropic
Mashable SEA1d ago
Read update
Discord group says it accessed Claude Mythos by guessing location

Anthropic's 'Too Dangerous To Release' AI Model Was Accessed By Discord Group On Day One

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. Because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations: 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA).

Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in. The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be. According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites rather than launching cyberattacks.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there is no evidence the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems.

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview. The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg.

Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

The unauthorized access itself was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) held legitimate credentials through work at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.

So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.
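The endpoint randomization fix mentioned above is easy to picture in code. What follows is a minimal sketch in Python, not a description of Anthropic's actual setup: the host api.example.com, the path scheme, and both function names are hypothetical. It contrasts a convention-derived endpoint, which anyone who learns the pattern can enumerate, with a high-entropy one that cannot practically be guessed.

    import secrets

    # Minimal sketch of the "endpoint randomization" idea discussed above.
    # Everything here is hypothetical (api.example.com, the path scheme);
    # nothing reflects Anthropic's actual infrastructure.

    def predictable_endpoint(model_name: str, stage: str) -> str:
        """A convention-derived endpoint. Anyone who learns the naming
        pattern (say, from a vendor leak) can enumerate candidates for
        unreleased models."""
        return f"https://api.example.com/v1/{stage}/{model_name}"

    def randomized_endpoint() -> str:
        """An unguessable endpoint. The high-entropy slug replaces the
        convention-derived path; the slug-to-model mapping would live
        server-side, so knowing one endpoint reveals nothing about others."""
        slug = secrets.token_urlsafe(24)  # 24 random bytes, about 192 bits of entropy
        return f"https://api.example.com/v1/preview/{slug}"

    if __name__ == "__main__":
        print(predictable_endpoint("mythos-preview", "restricted"))  # guessable
        print(randomized_endpoint())                                 # not guessable

Randomization only raises the cost of guessing, though; it is no substitute for authentication. The Discord group also held valid contractor credentials, so rotating and narrowly scoping those credentials would matter at least as much as hiding the URL.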

Discord · Mercor · Anthropic
Zero Hedge · 1d ago

Anthropic investigates unauthorized access to Mythos AI model after contractor credentials compromised

A small group exploited third-party vendor weaknesses to access an AI model capable of discovering thousands of zero-day vulnerabilities - a model Anthropic had locked behind a $100M restricted access program.

An AI model that can autonomously find over 1,000 zero-day vulnerabilities across major operating systems just got accessed by people who were never supposed to touch it. That's roughly the cybersecurity equivalent of leaving the keys to every lock in the building taped to the front door. Anthropic confirmed that its Claude Mythos Preview model, a system with genuinely alarming offensive cybersecurity capabilities, was breached by a small group of unauthorized users. The access was gained through compromised contractor credentials from a third-party vendor, combined with URL inferences gleaned from a separate data breach at Mercor, an AI training data provider. The breach came to light roughly two weeks after Anthropic publicly announced Mythos on April 7, 2026.

Here's the thing about Mythos that makes this breach particularly unsettling. This isn't a chatbot that writes poetry or summarizes PDFs. Mythos was designed to discover security vulnerabilities autonomously, and it turned out to be disturbingly good at the job. The model identified thousands of zero-day vulnerabilities, which are security flaws unknown to the software vendor and therefore unpatched, across major operating systems and web browsers. Among its discoveries was a 27-year-old flaw in OpenBSD, a system widely regarded as one of the most secure operating systems ever built. In English: Mythos found holes in software that the entire global security community missed for nearly three decades.

At the time the breach was discovered, over 99% of the vulnerabilities Mythos identified remained unpatched. That statistic alone explains why Anthropic wasn't exactly planning to hand out free trials. The model's capabilities represent a double-edged sword of historic proportions. In defensive hands, it's a revolutionary security tool. In the wrong hands, it's a skeleton key to the internet.

The unauthorized users gained access within roughly 24 hours of the model's public announcement. The speed of the intrusion suggests either sophisticated planning or opportunistic exploitation of already-compromised credentials. Either way, it exposed a fundamental weakness not in Anthropic's core infrastructure, but in the sprawling chain of third-party vendors that modern AI companies depend on.

Anthropic's containment plan was both swift and expensive. Project Glasswing, the restricted access program announced alongside the model, is designed to let vetted organizations use Mythos for defensive cybersecurity purposes while keeping the model locked away from everyone else. The program comes with $100 million in usage credits for participating organizations. That's a substantial investment, signaling that Anthropic views this not as a PR crisis to manage but as an existential governance challenge to solve. The goal is straightforward: allow trusted entities like government agencies and financial institutions to leverage Mythos for identifying and patching vulnerabilities in their own systems, without creating pathways for malicious exploitation.

Look, the concept sounds elegant on paper. In practice, restricting access to a model this powerful is like trying to put toothpaste back in the tube. Once the capabilities are known to exist, the incentive structure for bad actors to replicate or access them only intensifies.
The breach itself has been categorized as a vendor security failure, which is a polite way of saying the weakest link wasn't Anthropic's own security but the credentials management practices of a contractor. This pattern is painfully familiar across the tech industry. Some of the most consequential breaches in history, from Target to SolarWinds, exploited third-party access points rather than primary defenses.

This incident arrives at a moment when AI safety discourse has shifted from theoretical hand-wringing to concrete urgency. Government officials and financial sector leaders have reportedly begun urgent discussions about how to govern AI systems with capabilities this significant.

For investors tracking the AI and cybersecurity sectors, the Mythos breach crystallizes several trends worth watching closely.

First, the cybersecurity market is almost certainly about to see accelerated capital flows. When an AI model can find thousands of zero-day vulnerabilities that human researchers missed for decades, every organization with a digital footprint suddenly needs to reassess its defense posture. Companies specializing in vulnerability management, endpoint detection, and AI-powered security tools stand to benefit as enterprises scramble to adapt.

Second, AI companies face a new category of reputational and regulatory risk. Anthropic built Mythos with defensive applications in mind, but the unauthorized access demonstrates that intent and outcome don't always align. Regulators will likely use this incident as evidence that voluntary safety commitments are insufficient, potentially accelerating mandatory compliance frameworks for AI developers. Any company building frontier AI models should be pricing in the cost of significantly more rigorous access controls and vendor audits.

Third, the third-party vendor ecosystem is becoming a critical vulnerability surface for AI companies specifically. Traditional software companies have dealt with supply chain security for years, but AI models represent a unique challenge. The value of unauthorized access to a model like Mythos is orders of magnitude higher than access to a conventional enterprise software tool. This asymmetry between the value of the asset and the security of the access chain creates an extremely attractive target profile for sophisticated threat actors.

The competitive landscape may also shift in interesting ways. Anthropic's willingness to invest $100 million in a controlled access program suggests that frontier AI companies will increasingly need to build security and governance infrastructure that rivals their research capabilities. That's expensive and complex, potentially favoring larger, better-capitalized players over smaller AI startups that lack the resources to manage models with dual-use potential.

There's also a less obvious dynamic at play. Mythos' ability to discover vulnerabilities at scale could eventually become a net positive for overall internet security, if its deployment remains restricted to defensive applications. The 99% unpatched rate means the model has essentially generated a roadmap for fixing critical flaws across the software ecosystem. Whether that roadmap gets used for patching or exploitation depends entirely on how well Anthropic and its partners can maintain control.

The Mercor data breach connection adds another layer of concern. It suggests that breaches at AI training data providers can have cascading effects, creating attack vectors that weren't previously considered. As the AI supply chain grows more interconnected, a security failure at one node can compromise systems several degrees removed.

For what it's worth, Anthropic appears to be taking this seriously rather than defaulting to the standard corporate playbook of minimizing and moving on. The scale of the Glasswing investment and the speed of the response suggest genuine alarm at the leadership level. But the fundamental tension remains unresolved. Building AI systems powerful enough to autonomously discover zero-day vulnerabilities means building AI systems powerful enough to cause serious harm if control is lost. The Mythos breach didn't result in catastrophic exploitation, at least not that we know of yet. The next one might not be so uneventful.

Bottom line: The Mythos incident is a live demonstration that AI safety isn't an abstract philosophical debate. It's an operational security problem with real-world consequences. How Anthropic, regulators, and the broader industry respond will set precedents for governing the most capable AI systems ever built. The $100 million question, literally, is whether restricted access programs can actually work when the incentives to break them are this high.

Mercor · Anthropic
Crypto Briefing · 1d ago

Unauthorized Access to Anthropic's Mythos AI Model Reported by Bloomberg | ForkLog

Unauthorized access to Anthropic's Mythos AI model has been reported. A small group of unauthorized users gained access to Anthropic's new AI model, Mythos, according to Bloomberg, citing internal documents. The outlet reports that several members of a closed online forum accessed the model on its release day and have been using it regularly since.

Anthropic promotes Mythos as a system capable of detecting and exploiting vulnerabilities "in all major operating systems and web browsers." Consequently, the company has restricted access to a select group of software providers.

To infiltrate the system, the users employed several tactics: credentials belonging to an employee of an Anthropic contractor, a guess at the model's URL based on the naming of the company's other systems, and additional information extracted from a data leak at the startup Mercor. A Bloomberg source claims the group intends only to experiment with the new model and does not plan to cause harm. Besides Mythos, its members have access to several other unreleased Anthropic models.

"We are investigating a report of unauthorized access to the Claude Mythos Preview through one of our third-party environments," a company representative stated. The incident highlights the difficulty of controlling the spread of potentially dangerous technologies and raises the question of who else might gain access to Mythos, and for what purposes.

Mozilla reported on its blog that an early version of Mythos helped identify 271 vulnerabilities in the Firefox browser during internal testing. The issues have since been resolved. The result demonstrates how advanced AI systems can analyze large codebases and identify weaknesses that previously required meticulous scrutiny by cybersecurity experts. Mozilla had previously tested another Anthropic model, which identified 22 vulnerabilities in an earlier version of Firefox.

Despite the new findings, the company acknowledges that achieving absolute security is an "unrealistic goal." It also stated that all of the discovered flaws could have been found by a top-tier human researcher. "Some commentators believe that future AI models will discover entirely new forms of vulnerabilities beyond our current understanding. We do not share this view," the company noted.

In April, media reported that the U.S. National Security Agency is using Mythos, despite the company's ongoing conflict with the Pentagon.

Mercor · Anthropic
ForkLog · 1d ago

Anthropic Investigates Unauthorized Access to 'Mythos' AI After Private Discord Group Bypasses Restrictions

The group is said to be part of a private Discord community that hunts for information about unreleased AI models.

Earlier this month, Anthropic released a preview of what it described as its "most powerful model yet," called Mythos, which it says has advanced cybersecurity capabilities. Experts and even Anthropic itself have warned that the model could be extremely dangerous in the wrong hands, potentially enabling severe cyberattacks faster than companies can respond. That concern is partly why the company opted for a limited rollout of the model to major technology and financial institutions under an initiative called Project Glasswing.

Since its public reveal, the model has created a frenzy among security experts and U.S. government officials. Reports say the technology recently prompted emergency discussions between officials and major Wall Street banks. But despite the tight restrictions Anthropic placed on access to the model, a small group of outsiders reportedly gained entry anyway.

According to a report from Bloomberg, a handful of users in a private online forum managed to access Mythos. The access allegedly occurred on the same day the model was announced for limited testing, though details are only now coming to light. The information came from an individual familiar with the situation, who reportedly provided screenshots and a live demonstration of the model to verify the claim.

Unauthorized access to such a system raises concerns because of what the model is capable of doing. In Anthropic's own words, Mythos can identify and exploit vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." In simple terms, the model can scan software for security flaws. In theory, that capability could help organizations defend themselves, or it could help attackers locate weaknesses in targets' systems. That dual-use potential is a key reason Anthropic restricted the release. The company reportedly shared access to Mythos with a small number of organizations, including companies such as Apple, Amazon, and Cisco Systems, allowing them to test their own infrastructure for vulnerabilities before a wider rollout.

According to Bloomberg, the group responsible for the alleged unauthorized access is part of a private Discord community that searches for information about unreleased AI models. Members reportedly use bots and other tools to scan sites such as GitHub for technical clues. One individual in the group is said to have had contractor-level access to a third-party vendor environment used by Anthropic. That access reportedly helped the group get closer to the Mythos system.

The method used to locate the model appears to have been surprisingly simple. The group allegedly made "an educated guess" about the model's online location based on knowledge of the naming patterns Anthropic uses for its systems. Some of those technical details were reportedly exposed in a recent data breach involving Mercor, a company that works with several AI developers.

Responding to the report, Anthropic said it is investigating the situation. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement. Anthropic added that it currently has no evidence the reported access went beyond the vendor environment or affected its internal systems. According to Bloomberg's source, the group did not use the model to attempt cyberattacks. Instead, they reportedly ran simple tests, such as asking the model to build basic websites.

Anthropic · Mercor · Discord
Techloy · 1d ago

Anthropic Mythos AI Hacked Day One by Discord Group

A small group of unauthorized users accessed Anthropic's Claude Mythos Preview on the same day the company announced its controlled release, according to Bloomberg. The incident raises questions about Anthropic's ability to contain a model it deemed too dangerous for public release.

How a Discord Group Walked Into Mythos

Members of a private Discord channel dedicated to hunting unreleased AI models made an educated guess about the Mythos endpoint URL. "Anthropic said Mythos was too dangerous to release. Then four random guys in a Discord gained access on day one by guessing the URL...," wrote Josh Kale, a popular user on X.

They reconstructed Anthropic's naming conventions using data exposed in the Mercor breach three weeks earlier, Bloomberg reported, citing a person familiar with the matter. One group member also held legitimate evaluation credentials through contract work for an Anthropic vendor. Those credentials, combined with the guessed URL, granted the group ongoing access.

The users have reportedly been running Mythos regularly since gaining entry. However, they have avoided cybersecurity-related prompts and instead used it for benign tasks like building simple websites. Anthropic confirmed it is investigating the report but said it has found no evidence the access extended beyond the vendor environment.

Anthropic has said Mythos can identify and exploit zero-day vulnerabilities in every major operating system and web browser. Under Project Glasswing, the company restricted access to roughly 40 approved organizations, including Apple, Amazon, and Cisco, strictly for defensive security testing.

White House Pushes Federal Access Despite Pentagon Ban

The breach comes as the White House moves to expand Mythos access to civilian federal agencies. The Office of Management and Budget emailed Cabinet officials on April 15 outlining plans for a safeguarded version of the model.

This represents a reversal from earlier this year, when the Pentagon designated Anthropic a "supply chain risk" after the company refused to remove safety guardrails for military use. "We will not let ANY company dictate the terms regarding how we make operational decisions," Defense Department spokesman Sean Parnell wrote on X. A federal judge later paused the broader ban following an Anthropic lawsuit.

Anthropic CEO Dario Amodei met White House officials on April 17, with both sides calling the talks "productive." The NSA has already been using Mythos for vulnerability scanning despite the Pentagon blacklist, according to Axios.

Discord · Mercor · Anthropic
BeInCrypto · 1d ago

Mercor's 23-Year-Old Billionaire Founders Grapple With Employee Fraud And North Korean Infiltration

Founded in 2023 by 20-somethings, data labeling startup Mercor exploded to a $1 billion annualized revenue run rate in September. Now it's confronting a wave of challenges, including an employee stealing money, security blunders and cultural growing pains.

During an all-hands meeting earlier this year at data labeling startup Mercor, its then 22-year-old billionaire CEO Brendan Foody pulled up a slide with a single word: fraud. An employee had embezzled company funds, he told his staff of more than 200. The person had since been fired. There would be no tolerance for this behavior, Foody said, according to four people familiar with the meeting.

Foody didn't identify the employee or disclose the amount stolen at the meeting. But Forbes has learned that the culprit was an early hire and lead manager on the Anthropic account, one of the company's most important, where Mercor's contractors create training data to help build Claude. Multiple former Mercor employees said the manager had recruited his brother and father as "experts" and sent them hundreds of thousands of dollars in so-called bonus payments. He was reported in late December after it was discovered that contractors were paid more than the amount billed to Anthropic for multiple data generation projects, two sources said. Anthropic was not aware of the incident, they added. Mercor eventually recovered the fraudulent bonus payments, and the scheme did not end up costing customers any money, Mercor spokesperson Heidi Hagberg told Forbes. The former Anthropic account lead, whom Forbes is not identifying, declined to comment for this story. Anthropic declined to comment.

It's just one episode in what more than a dozen former employees describe as a series of operational mishaps at Mercor, a fast-growing startup that has recruited 50,000 highly skilled experts -- PhDs, lawyers, bankers, scientists and programmers -- to create training data for big AI labs like OpenAI. It's been hugely successful so far: In September 2025, Mercor's annualized revenue run rate crossed $1 billion, or $83.3 million in monthly revenue, according to a person familiar with the company.

Founded in 2023 by three longtime friends who met on the high school debate team, Mercor has become a poster child for booming Silicon Valley AI startups run by unusually young, unusually wealthy founders. The three cofounders -- Foody, CTO Adarsh Hiremath, and board chairman Surya Midha -- were 22 years old when they became the world's youngest self-made billionaires in October, after raising $350 million from storied VCs like Felicis, Benchmark and General Catalyst at a $10 billion valuation, Forbes reported.

Beyond the fraud incident, Mercor has suffered from a number of security problems in recent months, according to interviews with five former staffers. One example: As early as November 2024, and continuing until recently, Mercor employees suspected that North Korean operatives had worked for the company by using stolen credentials to skirt identity checks, multiple sources told Forbes. In a number of instances, the suspected operatives produced data for American AI labs such as Anthropic, the sources said. Internally, they were referred to as "NKs," one former employee said, and were known to be among the best at the code-writing tasks contractors were asked to do.
"They would work 80 hours a week and produce the cleanest code," this person told Forbes. One of the first people employees suspected was a project lead who "everybody trusted a lot and was given a lot of responsibility," another former Mercor employee said. Employees used fraud detection systems to confirm their suspicions. He was later fired. One former Mercor employee described looking at one of the video interviews that experts record when they are onboarded onto the platform, expecting to find a person working from a home office, as many experts did. But instead, the person was working in a drab office, with many other people visible in the background who were wearing the same black, over-the-ear headphones. When he looked at another interview for another contractor, he saw the same scene from a different vantage point. Got a tip? Contact Rashi Shrivastava at [email protected] or rashis.17 on Signal or Anna Tong at [email protected] or (650)468-3913 on Signal. Former employees said Mercor tried to address the issue by testing a trio of different screening companies and establishing a three-person fraud team. Employees also created and circulated an internal guide on how to identify the so-called "NKs," one of the sources said. The company now works with identity verification software firm Persona to conduct these checks. "Multiple frontier labs have said we have industry leading fraud detection. That is because we have invested heavily in our fraud-detection processes and team, including around-the-clock monitoring and IP-blocking, to prevent and detect any misuse of our platform," Hagberg said. The North Korean issue is industry-wide: For years North Koreans have tried to infiltrate American companies via remote jobs, sending millions of dollars back home to fund illegal weapons programs, CNN reported in August. That has trickled into the data labeling industry too. At Mercor, former employees expressed concern that the suspected North Koreans would have been able to see what kinds of training data frontier AI labs prioritize -- information the labs guard as proprietary trade secrets. A senior executive patrols the office at 9 p.m. taking notes of who's not at their desks, two former employees told Forbes. Mercor has also faced a more severe security breach that could cost it at least one major client. In early April, the company said it was among the thousands of companies targeted in a massive hack linked to open source project LiteLLM. Meta told Forbes that its work with Mercor is "paused" while the social media giant investigates the breach. Now, other frontier labs, including OpenAI, are evaluating their work with the startup as they investigate whether their proprietary training data was exposed, multiple sources told Forbes. OpenAI declined to comment. "Nearly every customer has been business-as-usual and has continued to start new projects with us throughout our third-party investigations," Mercor spokesperson Hagberg said. She said Mercor's security team is conducting an investigation with external parties and has moved to remedy the breach. The startup has also been hit with at least six lawsuits from contractors alleging Mercor's negligence led to the exposure of private data like Social Security numbers, full names and other customer data, according to federal court filings. Mercor declined to comment on ongoing litigation. The stakes are high for Mercor: AI labs have a slew of options for data labelers, and can switch quickly to new providers. 
There's Scale, whose former CEO, Alexandr Wang, previously held the title of the world's youngest self-made billionaire; Invisible Technologies, valued at more than $2 billion in September 2025; Surge, whose founder Edwin Chen is the youngest billionaire on the Forbes 400 list; and Turing AI, which raised $110 million in July at a $2.2 billion valuation. Even newer entrants like micro1, which crossed $300 million in annualized revenue this month, and Handshake, which has more than $850 million in annualized revenue per a source familiar with the company, are quickly gobbling up market share.

'The intensity might not be for everybody'

As Mercor scaled from fewer than 40 employees a year ago to almost 300 today, former employees said the company's culture dramatically shifted. Employees describe an intense "996" work culture, where it's common to work at least 9 a.m. to 9 p.m. six days a week. Priorities and project scopes shift quickly. Timelines are often compressed and unrealistic. A senior executive patrols the office at 9 p.m. taking notes of who's not at their desks, two former employees told Forbes. And after an abrupt change in pay structure for some project leaders, the company suffered a wave of departures, multiple sources said.

"We don't mandate hours, we expect people to work hard and match the pace of our customers who are the most consequential companies in the world," Mercor spokesperson Hagberg said. "The intensity might not be for everybody and that's okay."

The shift appears to have begun in May 2025, when Mercor hired Sundeep Jain, Uber's former product chief, as its first president to build out the business. Jain was responsible for overseeing new hires and management processes as well as finding better ways to track and report data to clients, the cofounders told Forbes in September.

When Jain took over, internal structures and processes changed. In October, he altered the compensation structure for Mercor's strategic project leads (SPLs), who manage budgets and recruit experts for data labeling projects. Four sources said that of Mercor's 30+% profit margin on its data labeling projects, SPLs had been rewarded with 5% in cash and 10% in equity. Instead of commissions tied to revenue, SPLs would now be paid bonuses based on performance reviews, according to multiple former employees. Under the new system, high performers made more money while low performers made less, a standard way to incentivize employees at many fast-growing startups, Mercor spokesperson Hagberg said. But several SPLs viewed the new performance-review system as arbitrary and unfair. Some said they received less commission than they were promised, according to multiple sources. "Mercor's compensation is in the 99th percentile, according to a leading compensation consulting company. It has always been consistent with compensation shared on offer letters," Hagberg said.

Several months after Jain arrived, cofounder and chief operating officer Midha stepped away from day-to-day operations, transitioning to the role of chairman of the board in October 2025. Recently, Mercor cofounder and CTO Hiremath was promoted to co-CEO. The company has said that he will be leading a newly established enterprise offering that helps companies build agents for their internal workflows.
Two former employees told Forbes that Foody and Hiremath have had many disagreements, are rarely seen speaking to each other and work from offices on different floors. "Brendan and Adarsh are best friends and have been since high school. They talk every day. They have a shared vision and a shared purpose," Hagberg said.

Executives also encouraged a cutthroat culture: In a recent internal talent survey, Mercor leadership asked employees to anonymously rat out colleagues, asking "Who on the team do you think lowers the bar?", noting that only "ABS" - an acronym for Adarsh, Brendan and Sundeep, the first names of Hiremath, Foody and Jain - would see the answers, according to a copy of the survey viewed by Forbes. They also asked which employees raised the bar, a source said.

At least one former employee landed a cushy exit. Two sources told Forbes that the former employee accused of siphoning money has already received investment from BoxGroup for a new venture that would create a fully autonomous company in the marketing realm. On the idea alone, BoxGroup invested $1.5 million, one source said. The firm did not respond to a request for comment.

Mercor · Anthropic
Forbes · 8d ago

LinkedIn is Testing an "AI Labor Marketplace" to Rival Mercor and Scale AI

LinkedIn is joining the growing list of companies building businesses around AI training by testing an "AI labor marketplace." In a move first reported by Business Insider, the Microsoft-owned platform is now recruiting subject matter experts to train the next generation of generative AI models, offering hourly rates that reach as high as $150.

The strategy marks a shift for LinkedIn, which has historically focused on traditional corporate hiring. Now, the company is leveraging its database of over one billion verified professionals to compete directly with AI staffing companies like Scale AI and Mercor. According to LinkedIn data, "AI training" has become one of the fastest-growing job categories in the U.S. market. Unlike general data labeling, these roles require deep industry nuance -- a "human-in-the-loop" approach where experts rate chatbot accuracy and challenge systems with complex, real-world scenarios.

A LinkedIn spokesperson confirmed to Business Insider that the company is in "early testing" for this new marketplace. By matching frontier AI labs -- such as OpenAI and Anthropic -- with verified human talent, LinkedIn is positioning itself as a premium supplier of high-quality training data.

This move comes at a critical time for the industry. Competitors like Mercor have seen valuations soar to $10 billion but are currently grappling with the fallout of significant data breaches. By contrast, LinkedIn is betting that its established reputation for professional verification will give it the edge in a market where "context" is becoming more valuable than raw code.

For users, participation in this market is largely "opt-out": LinkedIn has already begun utilizing profile data from several regions to train its proprietary models. The new labor marketplace, however, allows professionals to monetize their expertise actively. LinkedIn has rolled out features that allow members to receive direct notifications when these high-paying AI training opportunities become available. To find these roles, users are encouraged to search for "AI Content Analyst" or "AI Trainer" within the platform's job portal.

Anthropic · Mercor
Techloy · 9d ago