Anthropic Issues 8,000 Copyright Takedowns to Scrub Claude Code Leak
PCMag UK | 4/1/2026

Anthropic is scrambling to contain the leak of its popular AI tool, Claude Code, by issuing over 8,000 copyright takedown notices.

Anthropic is trying to scrub the leaked code from GitHub, which reports processing copyright takedown notices covering an "entire network of 8.1K repositories," the pages where users store computer code.

The company confirmed the leak is real after a user on Tuesday morning spotted Anthropic accidentally shipping the source code in a 59.8MB file in the since-deleted 2.1.88 release of Claude Code. The discovery sparked a flood of interest, and the leaked code proliferated across thousands of repositories on GitHub, a popular platform for hosting software projects.

An Anthropic spokesperson told PCMag that the "Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

However, programmers have been trying to find ways to keep the leaked source code up. This has included using AI to rewrite the code in different scripting languages, such as Python and Bash. The intent is to alter the code substantially enough to dodge Anthropic's copyright takedowns, which have been getting the GitHub repositories removed for infringement.

"The source is, for all practical purposes, permanently public," wrote Systima, a consultancy focused on AI agents. However, Systima noted that "the leak did not expose model weights, training data, or API infrastructure" for Claude Code.

Still, the incident could be a major blow to Anthropic, as it pulls back the curtain on its flagship product, Claude Code, giving rival companies a valuable resource for improving their own AI coding tools. Developers digging through the file have found that it reveals several features, including a technique to prevent bad actors from cloning Claude Code and a mode that strips evidence that an AI was behind the output.

The leak also mentions "KAIROS," which appears to be an unreleased autonomous agent mode, as well as a Tamagotchi-style "buddy" companion system, though the latter might be an April Fools' joke, according to software engineer Alex Kim.

Originally published by PCMag UK
