Meta has indefinitely paused its collaboration with Mercor, a San Francisco-based AI data startup valued at $10 billion, following a security breach that may have exposed proprietary training methodologies for large language models.
The incident has triggered broader investigations across the AI industry, as Mercor serves as a critical data vendor for several of the world's leading AI laboratories. These firms rely on Mercor to generate bespoke, proprietary datasets that are essential for training models such as ChatGPT and Claude Code.
The breach was executed via a supply chain attack involving a poisoned version of the LiteLLM open-source library. This attack allowed hackers to access sensitive information, including not only personal data but also the specific training blueprints used by AI labs to build their most powerful models.
Mercor confirmed the attack to its staff in an email dated March 31, 2026. In that communication, the company stated that the security incident affected its systems along with those of thousands of other organizations worldwide.
The exposure of these methodologies is particularly sensitive because training data and the processes used to curate it are often kept secret to prevent competitors -- including other labs in the U.S. and China -- from replicating their results.
While Meta has completely frozen its work with the startup, other major players are taking different approaches to the incident.
The suspension of work has had an immediate impact on the human workforce employed by Mercor. The startup utilizes a massive network of contractors -- including engineers, lawyers, doctors, bankers, and journalists -- to produce high-quality training data.
Contractors assigned to Meta projects have been informed that they cannot log hours unless and until the project resumes, effectively leaving them without work.
The breach has resulted in a class action lawsuit filed on behalf of more than 40,000 affected people. The incident highlights the vulnerability of the AI supply chain, specifically the industry's reliance on third-party vendors to handle its most closely guarded secrets.
Mercor was founded in 2023 by Brendan Foody, Adarsh Hiremath, and Surya Midha, three former teammates from the Bellarmine College Preparatory Speech and Debate team. Since its inception, the company has become a pivotal link in the AI economy by providing the high-quality, human-generated data required to refine large language models.
The current situation has created significant anxiety within the sector, as companies have invested billions of dollars into proprietary training methods that they intended to keep confidential.