
Update exposes Claude Code harness, spurring cloning and security fears
Anthropic just got a crash course in how fast its own AI buzz can backfire. The company behind Claude Code, an AI coding assistant with "viral popularity" among developers, is scrambling after a Tuesday update briefly exposed on GitHub the internal instructions that tell the tool how to behave -- its so-called "harness" -- per the Wall Street Journal. By Wednesday, Anthropic had pressed GitHub to remove more than 8,000 copies and derivatives via copyright claims. The company says only some internal source code leaked due to "human error," and that its model "weights" remain protected. Still, the exposed material, reportedly some 500,000 lines of code, offers rivals and hobbyists a detailed blueprint for recreating Claude Code's behavior -- and gives would-be attackers new angles to probe.