Anthropic accidentally released source code for its Claude AI agent, prompting developers to examine the code for insights into the platform's future evolution and leading experts to raise concerns about potential security vulnerabilities. The incident follows a previous accidental public release of thousands of files, including details about a powerful new model.
Anthropic inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the startup's plans.
"Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
The leak of basic source code, the second slip-up in just a week, triggered a community discussion about new revelations of how Anthropic's popular coding agent works. Developers said on X that they were poring over the details to figure out how the startup intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure.