News & Updates

The latest news and updates from companies in the WLTH portfolio.

Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

A private Discord channel dedicated to sniffing out unreleased AI models pulled off the unthinkable. Its members accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor. Bloomberg broke the story, detailing how the group provided screenshots and a live demo as proof, and reporting that the breach occurred through a vendor environment. Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch.

Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, and even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations.

The intruders? A handful of enthusiasts in that Discord server. They drew from the recent Mercor data breach, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg.

But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet its own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out that the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular use without malice -- but the precedent chills.

And the fallout? Anthropic's probe continues, with no breach beyond the vendor environment noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond. Mozilla confirmed early tests found vulnerabilities, per TechCrunch. But this slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fix: tighten vendor oversight. Rotate keys. Obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models in reach, hinting at persistent Discord hunts.

Irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: if Mythos can't shield itself, what hope for the wild? Expect audits. Partner scrutiny. Maybe Mythos turns inward, probing Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.
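The "obfuscate endpoints" fix is worth making concrete. A predictable deployment path derived from past naming conventions can be enumerated by anyone who has seen a few examples; a path that includes a random token cannot. This is a minimal, hedged sketch -- the function names and slug format are assumptions for illustration, not Anthropic's actual deployment scheme:

```python
import secrets

def predictable_slug(model_name: str, version: str) -> str:
    # e.g. "claude-mythos-preview-v1" -- derivable from past naming patterns,
    # which is reportedly how the Discord group found the endpoint.
    return f"{model_name.lower().replace(' ', '-')}-{version}"

def unguessable_slug(model_name: str) -> str:
    # Append a 128-bit random token so the deployment path cannot be
    # reconstructed from leaked naming conventions alone.
    return f"{model_name.lower().replace(' ', '-')}-{secrets.token_urlsafe(16)}"

print(predictable_slug("Claude Mythos Preview", "v1"))
print(unguessable_slug("Claude Mythos Preview"))
```

Random slugs are security-through-obscurity on their own, of course; they only raise the cost of guessing and are no substitute for the credential hygiene the same paragraph calls for.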

Mercor, Discord, Anthropic
WebProNews · 2h ago

Anthropic's locked-down Mythos leaks

The Rundown: Access to Anthropic's Mythos model reportedly leaked to a Discord group within days of launch, after users guessed the company's deployment URL from naming patterns exposed in the recent Mercor breach.

Why it matters: The first alleged unauthorized use of the AI model that had the White House and others calling emergency meetings didn't come from China, Russia, or another rival nation -- it came from a random Discord group. Not a great start, and the problem only compounds as partner access grows and the models get more dangerous.

Anthropic, Mercor, Discord
The Rundown AI · 7h ago

Anthropic's Mythos Breach: How Hackers Cracked Open AI's Most Dangerous Cyberweapon on Day One

A shadowy crew of AI enthusiasts pierced the defenses around Anthropic's Mythos on launch day. Boom. Access granted through a sloppy third-party vendor. Now this powerhouse model -- designed to hunt vulnerabilities across every major operating system and browser -- sits in unauthorized hands. TechCrunch broke the story, citing Bloomberg's reporting on the intrusion.

Mythos forms the core of Project Glasswing, Anthropic's bid to arm elite security teams with AI that autonomously crafts exploits. Think zero-days in Windows, macOS, Chrome, Firefox -- you name it. The company rolled it out to just 40 vetted partners, including Apple and Amazon, precisely because it could flip from defender to destroyer in seconds.

A person familiar with the matter told Bloomberg the group, huddled in a private online forum and Discord channel, sniffed out the model's URL pattern from prior leaks involving contractor Mercor. They interviewed a contractor employee, grabbed credentials, and logged in. Screenshots. Live demos. Proof delivered. And they've been poking around ever since. Not launching attacks, they claim. Just tinkering with the forbidden toy. "The group in question is interested in playing around with new models, not wreaking havoc with them," the source insisted to Bloomberg.

But capabilities like these don't stay playground-bound. Mozilla already tapped Mythos Preview directly from Anthropic to patch 271 Firefox bugs in its latest release. Firefox CTO Bobby Holley called it a "firehose of bugs," forcing teams to scramble with resources pulled from elsewhere. Wired detailed how this AI shifts vulnerability hunting into overdrive, exposing flaws humans miss -- but demanding discipline to wrangle the flood.

Anthropic moved fast. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said. No signs of core system compromise, they added.
Yet whispers on X suggest the breach hit multiple unreleased models too. One post from @ns123abc laid it bare: hackers guessed URLs post-Mercor leak and slipped in via lingering contractor creds. The whole pipeline exposed. Posts from @coinbureau and @LarkDavis amplified the alarm, noting restrictions to 40 firms exactly to curb cyber risks.

This isn't isolated sloppiness. The National Security Agency deploys Mythos despite Pentagon labels tagging Anthropic as a supply-chain risk -- a feud spilling into court. Axios reported wider NSA uptake, prioritizing cyber edge over bans. UK counterparts route through the AI Security Institute. Meanwhile, the breach spotlights vendor weak links in AI's high-stakes chain. Contractors like Mercor, hit earlier, leak naming conventions. Guesses turn into gateways. What if next time it's nation-states, not forum dwellers?

Broader ripples hit fast. CNBC aired segments on the leak during 'Fast Money,' with Kate Rooney flagging Silicon Valley tremors. The Financial Times confirmed Anthropic's probe into the 'powerful' model handed to a trusted few. Reddit threads in r/ClaudeAI and r/ClaudeCode buzzed with leaked excerpts, underscoring containment struggles for potent tech.

So where does this leave enterprise AI security? Tools like Mythos promise to outpace human hackers, spotting multi-step chains others ignore -- like a 27-year-old OpenBSD flaw or FreeBSD exploits. But day-one cracks erode trust. Partners demand ironclad isolation; regulators eye tighter controls. Anthropic's "safe AI" badge takes a hit, even as it sues the DoD over blacklists. Vendors scramble to audit creds. And those forum users? Still inside, testing limits. One wrong prompt away from chaos.

Mercor, Discord, Anthropic
WebProNews · 20h ago

Report: Discord Group Uses Claude's Supposedly Secret Mythos

An unauthorized group of users gained access to the Claude Mythos Preview artificial intelligence model and have regularly used it since the day AI firm Anthropic revealed the model's existence while pronouncing it too dangerous to release to the public, reports Bloomberg.

Anthropic made a splash earlier this month when it said it would reserve access for a select group of companies joined together under "Project Glasswing," with the understanding that they would use the model to find and fix security vulnerabilities before hackers get access to equally powerful tech. Members include Nvidia, Apple, Amazon and Cisco (see: Anthropic Calls Its New Model Too Dangerous to Release).

A source told Bloomberg the unauthorized users belong to a private Discord channel dedicated to unreleased models. An apparent member of the group told the newswire that users have not used Mythos to hunt for new exploits. Anthropic has touted the vulnerability-finding properties of Mythos in a publicity campaign that has received some outside validation, such as from the AI Security Institute in Great Britain, which found the model to be "a step up over previous frontier models."

The source told Bloomberg the Discord group deployed a mix of tactics to access Mythos, including using access the source has as a third-party contractor for Anthropic. The group "made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models." The person said such data was leaked in a recent breach at AI startup Mercor (see: Mercor Breach Linked to LiteLLM Supply-Chain Attack). An Anthropic spokesperson told the newswire that it is investigating the matter but that it has no evidence of unauthorized Mythos use beyond the third party's IT environment.
The source told Bloomberg the Discord group also has access to other unreleased Anthropic models. Anthropic's release of Mythos to select partners received a rejoinder from rival firm OpenAI, which days later released GPT‑5.4‑Cyber with the stated intention of making it "as widely available as possible." OpenAI said it will rely on user identity verification and "trust signals" to safeguard its vulnerability-seeking AI model from being put to bad uses (see: OpenAI Touts Wider Access to Its New Cyber Model).
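OpenAI has not published the mechanics of the "trust signals" it says will gate GPT‑5.4‑Cyber, so the following is a purely hypothetical sketch of what signal-based gating can look like: access requires verified identity plus a required set of accumulated signals. The signal names are invented for illustration:

```python
# Invented signal names -- OpenAI's actual criteria are not public.
REQUIRED_SIGNALS = {"verified_org_domain", "established_payment_history"}

def access_granted(identity_verified: bool, signals: set) -> bool:
    # Grant access only with verified identity AND all required trust signals.
    return identity_verified and REQUIRED_SIGNALS <= signals

print(access_granted(True, {"verified_org_domain", "established_payment_history"}))  # True
print(access_granted(True, {"verified_org_domain"}))                                 # False
print(access_granted(False, REQUIRED_SIGNALS))                                       # False
```

The design trade-off the two companies are making is visible even in this toy: Anthropic's allow-list is a static set of 40 names, while signal gating admits anyone who clears a behavioral bar, trading auditability for reach.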

Discord, Mercor, Anthropic
DataBreachToday · 1d ago

Discord group says it accessed Anthropic's unreleased Claude Mythos

An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release. Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security.

As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as found in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics. One member of the group already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel that focuses on hunting information about unreleased models.

A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.

Discord, Mercor, Anthropic
Mashable · 1d ago

Unauthorized users breach Anthropic's restricted Mythos AI model

A small group of unauthorized users gained access to Anthropic's new AI model Claude Mythos, Bloomberg reports. Anthropic considers Mythos powerful enough to enable dangerous cyberattacks, which is why the company only makes it available to select partners like Apple, Amazon, and Cisco through its "Project Glasswing" program. The users, members of a private Discord channel, got in on the day of the announcement. They pulled it off using the access credentials of a member who works as a contractor for Anthropic, along with publicly available information from a data leak at AI startup Mercor. According to Bloomberg, the group didn't use Mythos for cyberattacks but for harmless tasks like building simple websites for testing. The source says the group also has access to a number of other unreleased Anthropic AI models. The company says it's investigating the incident. So far, there's no indication that the access extended beyond the external contractor's environment or that Anthropic's own systems were compromised.

Anthropic, Discord, Mercor
THE DECODER · 1d ago

Mercor subjected to several breach-related lawsuits

According to a case document, the breach resulted in the exfiltration of consumers' and employees' sensitive personally identifiable information. It also led Meta to put its work with Mercor on pause. Mercor was reportedly one of "thousands" of affected companies that used the breached interface. Plaintiffs allege that the incident constitutes breach of privacy, breach of implied contract, negligence, unjust enrichment, and violation of California's Unfair Competition Law. They are seeking injunctive relief and compensation for out-of-pocket costs linked to the detection, recovery, and prevention of identity theft, unauthorized use of data, and fraud. While the number of affected individuals has not been disclosed, the independent contractors suing Mercor allege that the putative class consists of at least 100 Mercor clients and employees.

Mercor
SC Media · 8d ago

Sources: data labeling startup Handshake's gross annualized revenue hit ~$1B, vs. $550M in January; Mercor hit a $1B+ gross annualized revenue pace this year

@worldlibertyfi: Does anyone still believe @justinsuntron? Justin's favorite move is playing the victim while making baseless allegations to cover up his own misconduct. Same playbook, different target. WLFI isn't the first. We have the contracts. We have the evidence. We have the truth. See you in court pal. As an early supporter who invested heavily in World Liberty Financial, I did so because I believed in the vision that was presented to the public: a decentralized finance platform that would promote financial freedom, remove intermediaries, and bring the benefits of DeFi to mainstream Americans. What was never disclosed -- to me or to any investor -- is that World Liberty embedded a backdoor blacklisting function in the smart contract used to deploy WLFI tokens. This function gives the Company unilateral power to freeze, restrict, and effectively confiscate the property rights of any token holder, without notice, without cause, and without recourse. This is the opposite of decentralization. This is a trap door marketed as an open door.

Mercor
Techmeme · 9d ago

Meta Halts Work With Mercor After Major Breach, While ChatGPT-Parent OpenAI Investigates Incident: Report

Meta Platforms, Inc. (NASDAQ:META) has reportedly paused its work with data contractor Mercor following a major security breach.

Meta Freezes Mercor Work Amid Security Concerns
The suspension is indefinite, with sources indicating other AI labs are also reevaluating ties with Mercor, Wired reported last week, citing two sources. Mercor plays a critical role in building custom datasets used to train advanced AI systems. Meta did not immediately respond to Benzinga's request for comment.

OpenAI Investigates, Says User Data Not Impacted
OpenAI, the parent of ChatGPT, has not halted its projects with Mercor but is investigating the incident. A spokesperson told the publication that the breach in no way affects OpenAI user data, though the company is reviewing whether its proprietary datasets were exposed. OpenAI and Anthropic did not immediately respond to requests for comment.

Supply Chain Hack Linked To LiteLLM Compromise
The breach appears tied to a compromise of LiteLLM, a widely used AI integration tool. Cybercriminal group TeamPCP is believed to be behind the attack, which may have impacted thousands of organizations in a broader supply chain campaign. In an email to staff on March 31, Mercor said, "There was a recent security incident that affected our systems along with thousands of other organizations worldwide."

Confusion Over Hacker Claims, Worker Impact Emerges
A group using the name Lapsus$ also claimed responsibility, though researchers dispute the link, the report said. Researchers note that multiple cybercriminal groups now intermittently adopt the Lapsus$ name, and they said Mercor's confirmation of a LiteLLM link suggests the breach was likely carried out by TeamPCP or an affiliated actor. Mercor did not immediately respond to Benzinga's request for comment.

Price Action: META shares closed at $574.46 on Thursday, down 0.82%, according to Benzinga Pro.
Benzinga Edge Stock Rankings show Meta remains in a downtrend across short, medium and long-term time frames, despite placing in the 90th percentile for Quality.

Anthropic, Mercor
Benzinga · 17d ago

What happened to Meta's work with Mercor?

Meta has suspended its collaboration with Mercor, a data startup that provides AI training data services, after a supply-chain-style security incident exposed what could be sensitive training information. The pause is part of a wider pattern across the AI industry: data pipelines are becoming high-value targets, and a single breach can disrupt downstream training and partnerships.

The coverage frames Mercor's training-data supply chain as among "the AI industry's most closely guarded," and indicates that the breach created operational and security risks significant enough for Meta to halt work while it investigates. At the same time, other items in the feed link similar scrutiny to OpenAI regarding separate security incidents affecting its ecosystem. Together, the stories reinforce that AI labs and platforms increasingly treat data-provider security as mission-critical infrastructure rather than a peripheral vendor risk.

The reason these partnerships are vulnerable is structural: training and tuning often depend on third-party datasets and processing services, so a compromise can leak details about how models are trained, how data is curated, or what internal processes look like. Even without details on the exact scope of the leaked information, the news is material: the availability of training data and the confidentiality of training workflows are central to AI competitiveness. For enterprises and researchers building around these systems, vendor security incidents can quickly translate into reduced access, delayed releases, or changes to data procurement strategies.

Mercor
AllToc · 18d ago

AI recruiting startup Mercor hit by cyberattack; Meta halts collaboration - The Economic Times

As per media reports, Mercor was among thousands of firms affected by the compromise of LiteLLM. Even as Mercor has claimed that the malicious code was detected and removed, the breach drew attention because LiteLLM is widely used. LiteLLM has since strengthened its compliance measures, switching from the controversy-hit compliance startup Delve to Vanta for certifications.
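Compromised updates of the kind reported for LiteLLM are typically mitigated by pinning and verifying artifact checksums before anything is installed. A minimal sketch of that defense, where the artifact bytes and pinned digest are illustrative rather than real LiteLLM release data:

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Accept an update only if its SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Illustrative artifact and pin -- not real LiteLLM release data.
artifact = b"example package contents"
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))      # True: digest matches the pin
print(verify_artifact(b"tampered", pinned))   # False: reject the update
```

Package managers expose the same idea directly, e.g. pip's hash-checking mode, where a build fails if a downloaded wheel's digest differs from the one recorded in the requirements file.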

Mercor
Economic Times · 19d ago

Meta pauses Mercor work after major data breach

AI firms review data security risks as Mercor breach triggers industry-wide concern

Meta has paused its work with data contracting firm Mercor after a major AI data breach, raising concerns across the industry. The decision came after a cyberattack impacted Mercor's systems, with the company and its partners now assessing the scope of the breach and potential exposure of sensitive training data. Sources close to Meta say the company has suspended all collaboration with Mercor pending an investigation, and several other big AI companies are also reconsidering their relationships with the startup as the scandal develops.

Mercor has an important function in the world of AI: it generates huge amounts of human-generated data that companies such as OpenAI and Anthropic need to create sophisticated models. The data is very confidential because it shows how these companies operate to create their AI software. Proprietary datasets may have been compromised in the breach, but the real value of the stolen data to competing firms is not yet known. According to a statement from OpenAI, there was no leak of user data.

Mercor informed staff about the breach in late March, stating that its systems were impacted alongside thousands of other organisations. The situation has left contractors who work on Meta-related projects without any way to record their work hours, resulting in work shortages for some. Security researchers believe the breach may be tied to compromised updates of an AI tool called LiteLLM, which has the potential to impact thousands of organisations. The hacking group TeamPCP has emerged as the main suspect, while multiple other groups have stepped forward to take credit.

Mercor, Anthropic
The News International · 19d ago

Meta pauses AI training work with Mercor after data breach

Mercor is one of the few companies that OpenAI, Anthropic, and other AI labs rely on for generating training data. The firm employs a vast network of human contractors to create custom, proprietary datasets for these labs. These datasets are usually kept under wraps as they play a crucial role in developing valuable AI models that power products like ChatGPT and Claude Code. The data generated by Mercor is highly sensitive as it can give competitors, including other US and Chinese AI labs, insights into their training methods. However, it remains unclear if the information leaked in Mercor's breach would significantly benefit a competitor. OpenAI is currently investigating the incident to determine how its proprietary training data may have been compromised but has assured that the incident does not affect user data.

Anthropic, Mercor
NewsBytes · 19d ago

Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk - IT Security News


Mercor
IT Security News · 19d ago

Meta Pauses Work With Mercor Following Major AI Data Breach - News Directory 3

Meta has indefinitely paused all collaboration with the data contracting firm Mercor following a major security breach. The company is currently investigating the incident to determine if sensitive AI training secrets were exposed, a move that highlights the critical importance of proprietary datasets in the development of large-scale AI models.

Mercor serves as a key vendor for several of the world's leading AI labs, including Meta, OpenAI and Anthropic. The startup manages massive networks of human contractors to generate bespoke, proprietary datasets. These datasets are essential ingredients for training the models that power products such as ChatGPT and Claude Code. AI labs treat this training data as highly confidential because it can reveal the specific methodologies used to refine their models. Such disclosures could provide competitors, including other labs in the United States and China, with key insights into the proprietary recipes used to create competitive AI advantages.

Mercor confirmed the security incident in an email to its staff on March 31, 2026. In the message, the company stated, "There was a recent security incident that affected our systems along with thousands of other organizations worldwide." The breach has been linked to compromised updates to LiteLLM, an AI API tool. While a group using the name Lapsus$ claimed responsibility for the attack and reportedly listed the data for live auction, some researchers suggest that an attacker group known as TeamPCP is the likely culprit.

While Meta has taken the step of halting all work, other AI labs are taking different approaches. OpenAI has not stopped its current projects with Mercor, though a company spokesperson confirmed that OpenAI is investigating the incident to see how its proprietary training data may have been exposed.
The OpenAI spokesperson clarified that the security incident in no way affects OpenAI user data. Other major AI labs are reportedly reevaluating their work with Mercor as they assess the full scope of the breach. Anthropic did not immediately respond to requests for comment regarding its partnership with the vendor. The indefinite pause in collaboration has created immediate instability for the contractors Mercor employs to build its datasets. On April 2, 2026, a Mercor employee informed contractors that those staffed on Meta projects are unable to log hours until the projects resume. Because Meta's pause is indefinite, these contractors may be functionally out of work while the company continues its investigation into the risks posed by the breach.

Anthropic, Mercor
News Directory 3 · 19d ago

AI Training Data Giant Mercor Is Reportedly Looking to Buy the Work You Did at Your Old Job

Source: Gizmodo If you feel like your previous employer didn't properly compensate you, there might be a way to cash in on that work -- though it seems legally (and, depending on how you feel about artificial intelligence, morally) dubious. According to a report from the Wall Street Journal, AI training data giant Mercor is offering people payment in exchange for selling their prior work materials. Per the report, Mercor has been poking around a number of industries, including the entertainment space, and asking professionals if they'd be willing to sell stuff from previous jobs. Visual effects artists told the Journal that Mercor asked for production work like "4D physics scenes with camera data, depth and motion/point tracking" -- the kind of material that is specific to an industry and would be very difficult for the average person to get their hands on. It'll probably be difficult for Mercor to get their hands on, too. As the WSJ pointed out, a lot of what the AI training data company is asking for likely belongs to the employer for whom the work was initially done. The employees and contractors who have worked on these types of domain-specific projects are usually subject to any number of contracts that prevent them from sharing information related to their work. Much of it is likely covered by intellectual property laws, and the workers themselves are often made to sign confidentiality agreements. While the company said in a statement to the Journal that Mercor "does not buy intellectual property," the outlet also said that messages sent by Mercor to employers regarding their previous work did include the phrase "looking to purchase." Mercor could plausibly claim that it isn't specifically seeking IP in these requests, but it does seem like an inevitable outcome of such purchases. -snip- Read more: https://gizmodo.com/ai-training-data-giant-mercor-is-reportedly-looking-to-buy-the-work-you-did-at-your-old-job-2000742263

Mercor
Democratic Underground · 19d ago

AI Training Data Giant Mercor Is Reportedly Looking to Buy the Work You Did at Your Old Job

If you feel like your previous employer didn't properly compensate you, there might be a way to cash in on that work -- though it seems legally (and, depending on how you feel about artificial intelligence, morally) dubious. According to a report from the Wall Street Journal, AI training data giant Mercor is offering people payment in exchange for selling their prior work materials. Per the report, Mercor has been poking around a number of industries, including the entertainment space, and asking professionals if they'd be willing to sell stuff from previous jobs. Visual effects artists told the Journal that Mercor asked for production work like "4D physics scenes with camera data, depth and motion/point tracking" -- the kind of material that is specific to an industry and would be very difficult for the average person to get their hands on. It'll probably be difficult for Mercor to get their hands on, too. As the WSJ pointed out, a lot of what the AI training data company is asking for likely belongs to the employer for whom the work was initially done. The employees and contractors who have worked on these types of domain-specific projects are usually subject to any number of contracts that prevent them from sharing information related to their work. Much of it is likely covered by intellectual property laws, and the workers themselves are often made to sign confidentiality agreements. While the company said in a statement to the Journal that Mercor "does not buy intellectual property," the outlet also said that messages sent by Mercor to employers regarding their previous work did include the phrase "looking to purchase." Mercor could plausibly claim that it isn't specifically seeking IP in these requests, but it does seem like an inevitable outcome of such purchases. Mercor has made a name for itself by shelling out for domain expertise, paying people (often ones who have lost work) with specific job and industry knowledge to train AI models.
But anyone considering trying to cash in on some prior work material on the down low should probably proceed with caution if they're expecting any protection from Mercor. The company just recently suffered what seems to be a massive data breach, with as much as 4TB worth of sensitive data falling into the hands of hackers. According to the group that claimed responsibility for the breach, the stolen data includes candidate profiles, personally identifiable information, and employer data. Pretty much the exact kind of stuff you wouldn't want to be made public if you were slyly selling protected material.

Mercor
Gizmodo, 19 days ago

Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk

Meta has paused all its work with the data contracting firm Mercor while it investigates a major security breach that impacted the startup, two sources confirmed to WIRED. The pause is indefinite, the sources said. Other major AI labs are also reevaluating their work with Mercor as they assess the scope of the incident, according to people familiar with the matter. Mercor is one of a few firms that OpenAI, Anthropic, and other AI labs rely on to generate training data for their models. The company hires massive networks of human contractors to generate bespoke, proprietary datasets for these labs, which are typically kept highly secret as they're a core ingredient in the recipe to generate valuable AI models that power products like ChatGPT and Claude Code. AI labs are sensitive about this data because it can reveal to competitors -- including other AI labs in America and China -- key details about the ways they train AI models. It's unclear at this time whether the data exposed in Mercor's breach would meaningfully help a competitor. While OpenAI has not stopped its current projects with Mercor, it is investigating the startup's security incident to see how its proprietary training data may have been exposed, a spokesperson for the company confirmed to WIRED. The spokesperson says that the incident in no way affects OpenAI user data, however. Anthropic did not immediately respond to WIRED's request for comment. Mercor confirmed the attack in an email to staff on March 31. "There was a recent security incident that affected our systems along with thousands of other organizations worldwide," the company wrote. A Mercor employee echoed these points in a message to contractors on Thursday, WIRED has learned. Contractors who were staffed on Meta projects cannot log hours until -- and if -- the project resumes, meaning they could functionally be out of work, a source familiar with the matter claims.
The company is working to find additional projects for those impacted, according to internal conversations viewed by WIRED. Mercor contractors were not told exactly why their Meta projects were being paused. In a Slack channel related to the Chordus initiative -- a Meta-specific project to teach AI models to use multiple internet sources to verify their responses to user queries -- a project lead told staff that Mercor was "currently reassessing the project scope." An attacker known as TeamPCP appears to have recently compromised two versions of the AI API tool LiteLLM. The breach exposed companies and services that incorporate LiteLLM and that installed the tainted updates. There could be thousands of victims, including other major AI companies, but the breach at Mercor illustrates the sensitivity of the compromised data. Mercor and its competitors -- such as Surge, Handshake, Turing, Labelbox, and Scale AI -- have developed a reputation for being incredibly secretive about the services they offer to major AI labs. It's rare to see the CEOs of these firms speaking publicly about the specific work they offer, and they internally use codenames to describe their projects. Adding to the confusion around the hack, a group going by the well-known Lapsus$ name claimed this week that it had breached Mercor. On Telegram and on a BreachForums clone, the actor offered to sell an array of alleged Mercor data, including a 200-plus GB database, nearly 1 TB of source code, and 3 TB of video and other information. But researchers say that many cybercriminal groups now periodically take up the Lapsus$ name and that Mercor's confirmation of the LiteLLM connection means that the attacker is likely TeamPCP or an actor connected to the group. TeamPCP appears to have compromised the two LiteLLM updates as part of an even larger supply chain hacking spree in recent months that has been gaining momentum, catapulting TeamPCP to prominence.
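Supply-chain attacks like this one succeed because package managers trust whatever the registry serves at install time. One common mitigation is hash-pinning: recording the expected digest of every dependency artifact when it is first vetted, and refusing to install anything that does not match (pip supports this via its --require-hashes mode). A minimal sketch of the idea, with illustrative function names:

```python
# Sketch of hash-pinning a downloaded package artifact before installing it.
# In practice a lockfile records the pinned digests; this shows the core check.
import hashlib


def sha256_of_bytes(data: bytes) -> str:
    """Hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()


def sha256_of_file(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Accept the artifact only if its digest matches the pinned value."""
    return sha256_of_file(path) == expected_digest
```

Under this scheme a tampered release fails verification even if it is served under a familiar version number, because the attacker cannot reproduce the digest that was pinned when the dependency was originally reviewed.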
While launching data extortion attacks and working with ransomware groups, such as the group known as Vect, TeamPCP has also strayed into political territory, spreading a data wiping worm known as "CanisterWorm" through vulnerable cloud instances with Farsi as their default language or clocks set to Iran's time zone. "TeamPCP is definitely financially motivated," says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. "There might be some geopolitical stuff as well, but it's hard to determine what's real and what's bluster, especially with a group this new." Looking at the dark web posts of the alleged Mercor data, Liska adds, "There is absolutely nothing that connects this to the original Lapsus$."

Mercor, Anthropic
Wired, 19 days ago

AI Firm Mercor Confirms Breach as Hackers Claim 4TB of Stolen Data - IT Security News


Mercor
IT Security News, 20 days ago

AI recruiting firm Mercor hit by data breach | News.az

AI recruiting company Mercor revealed it was affected by the recent LiteLLM supply chain attack, in which hackers claimed to have stolen 4 terabytes of data. The incident, which occurred on March 27, stemmed from a Trivy supply chain compromise a week earlier, News.Az reports, citing foreign media. LiteLLM reported that the breach originated from a compromised maintainer's credentials used in their CI/CD security scanning workflow. The hacking group TeamPCP released two malicious LiteLLM PyPI package versions -- 1.82.7 and 1.82.8 -- that were available for about 40 minutes. These packages were likely automatically downloaded by thousands of organizations, including Mercor, due to LiteLLM's presence in an estimated 36% of cloud environments. Mercor stated on Wednesday, "We recently identified that we were one of thousands of companies impacted by a supply chain attack involving LiteLLM," confirming its exposure to the breach. "Our security team moved promptly to contain and remediate the incident. We are conducting a thorough investigation supported by leading third-party forensics experts," Mercor added. While the company has not shared details on the impact, the Lapsus$ extortion group listed Mercor on its leak site on Monday, claiming the theft of over 4TB of data. Lapsus$ is auctioning the information, which allegedly includes candidate profiles, personally identifiable information, employer data, user accounts and credentials, video interviews, proprietary information, source code, keys and secrets, and TailScale VPN data.
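Because the two malicious releases are identified by version number, a team triaging exposure could start by checking which environments have them installed. A minimal sketch, assuming the package is installed under its PyPI name litellm (the function names are illustrative, not part of any vendor tooling):

```python
# Sketch: flag environments running the two malicious LiteLLM releases
# named in the reporting (1.82.7 and 1.82.8).
from importlib import metadata

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}  # malicious PyPI releases


def is_compromised(version: str) -> bool:
    """Return True if the given LiteLLM version is a known-bad release."""
    return version in COMPROMISED_VERSIONS


def check_installed() -> bool:
    """Check the locally installed litellm package, if present at all."""
    try:
        return is_compromised(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # package not installed, so not exposed via this path
```

A version check like this only covers the initial install vector; since the packages ran with full access once downloaded, affected organizations would still need to rotate credentials and audit for persistence.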

Mercor
News.az, 20 days ago