
The $10 billion company, which has worked with the likes of Meta, has been served with at least seven class-action lawsuits in the wake of the breach, The Wall Street Journal (WSJ) reported Thursday (April 23).
The suits allege the breach exposed Mercor contractor information that included job interview recordings, facial biometric data and screenshots of employees' computers. One suit, the report added, claims Mercor collected applicant-vetting data, such as background checks, and shared it with partners in violation of federal regulations.
According to plaintiffs, the company's practices include monitoring its contractors' computers and sharing that data with clients, using recorded candidate interviews to train AI models, and training client models on materials potentially owned by other companies.
"We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," Mercor said in a statement to the WSJ.
"We take the privacy of our customers, contractors, employees and those we interview very seriously, and we comply with all relevant laws and regulations," the statement added, noting that the startup acted quickly to remedy the breach, which affected several other companies.
"We are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings," it said.
The WSJ report also included a comment from a Meta spokesperson, who said the company has paused its work with Mercor and is investigating the breach.
PYMNTS wrote earlier this week about the "new consensus" being formed around the "data problem" beneath the race to deploy agentic AI.
"More autonomous AI systems will raise the stakes for how data is created, governed, accessed and protected," that report said. "Synthetic data needs clearer standards. Real-world data needs tighter minimization. And the systems tying it all together need a stronger foundation of trust, security and control."
Also this week, PYMNTS examined the changing cybersecurity landscape, arguing that while few of this year's high-profile incidents can be called "AI attacks," it is still hard to ignore the corresponding uptick in AI-powered offensive capability.
"Anthropic's Claude Mythos Preview, for example, has reportedly demonstrated the ability to autonomously discover and exploit vulnerabilities across major operating systems and web browsers, including decades-old bugs in widely trusted systems," PYMNTS wrote.