
Anthropic, the high-flying artificial intelligence (AI) startup valued at $380 billion, is racing to mitigate the fallout after a human error led to public exposure of internal source code for Claude Code, its viral flagship programming assistant.
The leak, which occurred Tuesday during a routine update, has provided competitors and hackers alike with a rare, unfiltered look at the proprietary harness Anthropic uses to transform its AI models into sophisticated autonomous agents.
The company confirmed the lapse was not a malicious breach but a "release packaging issue." A published package for the tool included a source map that pointed directly to an archive of the tool's TypeScript source code.
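To see why a stray source map is so dangerous, consider the following sketch. It is illustrative only (the file names, map contents, and helper function are hypothetical, not Anthropic's actual code): a minified bundle published to a registry often ends with a `//# sourceMappingURL=` comment, and if the referenced `.map` file ships in the package, its `sourcesContent` field can embed the complete original TypeScript sources.

```typescript
// Illustrative sketch of how a shipped source map exposes original code.
// All names and contents here are hypothetical examples.

interface SourceMap {
  version: number;
  sources: string[];            // original file paths
  sourcesContent?: (string | null)[]; // full original source, if embedded
  mappings: string;
}

// A minimal map, as it might appear inside a published package tarball.
const exampleMap: SourceMap = {
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export const greet = (): string => 'hello';"],
  mappings: "AAAA",
};

// Recover original files from the map's embedded sources.
function extractSources(map: SourceMap): Record<string, string> {
  const out: Record<string, string> = {};
  map.sources.forEach((name, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) {
      out[name] = content;
    }
  });
  return out;
}

const recovered = extractSources(exampleMap);
console.log(Object.keys(recovered)); // lists the recoverable file paths
```

When `sourcesContent` is populated, no reverse engineering is needed: anyone who downloads the package can reconstruct the original files verbatim, which is consistent with how quickly copies of the code spread.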
An Anthropic spokesperson said, "No sensitive customer data or credentials were involved or exposed. We're rolling out measures to prevent this from happening again."
Although customer data remained secure, the commercial damage is significant. By Wednesday morning, Anthropic had filed copyright takedown requests to scrub more than 8,000 copies and adaptations of the code from GitHub.
However, the cat-and-mouse game continues; some developers have already used other AI tools to rewrite the leaked functionality into different programming languages to bypass automated takedowns.
The leak has demystified the art behind Claude Code's success.
Programmers dissecting the files discovered several of Anthropic's secret techniques for "cajoling" AI into high-performance agents, including: "dreaming," a process in which the model periodically reviews its tasks to consolidate memory; "undercover mode," a set of instructions directing the AI not to identify itself as an AI when publishing code; and "buddy," a Tamagotchi-style virtual pet hidden within the interface for user interaction.
The timing of the leak is particularly sensitive. Anthropic, founded by former OpenAI executives, saw its run-rate revenue soar past $2.5 billion as of February. That success has sparked an arms race, with Google, OpenAI, and xAI all rushing to build competing agentic tools.
The incident marks the company's second data blunder in a single week, following a recent Fortune report that descriptions of upcoming models were found in a publicly accessible data cache.
Industry analysts warn that beyond the loss of trade secrets, the leak provides a roadmap for jailbreaking or exploiting the software. For a company built on a reputation for AI safety, the sight of its most valuable internal instructions being mirrored across Reddit and X (where one post reached 21 million views) is a stark reminder that even the most advanced AI is still vulnerable to simple human error.