'Undercover' AI agent, Voice Mode and other features: Claude Code source code leak reveals projects Anthropic is working on


The Times of India

Anthropic's latest Claude Code leak has reportedly revealed details about several features the company may be developing, including an "Undercover" AI agent mode, a Voice Mode, and background systems designed to make the assistant more proactive and persistent. The leak, spanning more than 512,000 lines of source code across 2,000 files, has drawn attention from developers and researchers examining hidden, disabled, or incomplete features.

According to a report by Ars Technica, these references offer an early look at how Anthropic could expand Claude's capabilities, particularly in memory, automation, and collaboration. While not all of the features appear to be fully implemented, the code suggests ongoing work on tools that could reshape how users interact with AI systems in development environments.

The leaked code mentions a system called Kairos, a persistent "daemon" that continues running even after the Claude Code terminal is closed. It uses prompts that appear occasionally to check whether new actions are needed, as well as a "PROACTIVE" flag for "surfacing something the user hasn't asked for and needs to see now." Kairos is also linked to a file-based memory system designed to maintain continuity across sessions, helping the AI build "a complete picture of who the user is, how they'd like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you."

The leaked code also references an AutoDream system that helps track this memory over time. When a user is idle or ends a session, Claude Code is told, "You are performing a dream -- a reflective pass over your memory files." This process involves scanning transcripts for "new information worth persisting," removing "near-duplicates" and "contradictions," and trimming outdated or overly detailed entries.
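The consolidation pass described there, dropping stale entries and merging near-duplicates, resembles a standard memory-maintenance loop. Below is a minimal illustrative sketch; every name in it (the `Memory` record, the `consolidate` function, the age and similarity thresholds) is hypothetical and not taken from the leaked code:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Memory:
    text: str
    last_seen: int  # session counter when the fact was last confirmed

def consolidate(memories: list[Memory], now: int,
                max_age: int = 20, dup_threshold: float = 0.9) -> list[Memory]:
    """One reflective pass: trim stale entries, then merge near-duplicates."""
    # Drop entries that have not been confirmed for too many sessions.
    fresh = [m for m in memories if now - m.last_seen <= max_age]
    kept: list[Memory] = []
    # Walk from the most recently confirmed entry down, so the newest
    # version of each near-duplicate fact is the one that survives.
    for m in sorted(fresh, key=lambda m: m.last_seen, reverse=True):
        if all(SequenceMatcher(None, m.text, k.text).ratio() < dup_threshold
               for k in kept):
            kept.append(m)
    return kept
```

A real system would also need contradiction detection and durable storage; this sketch only shows the dedupe-and-trim shape of such a pass.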
It also directs the system to monitor "existing memories that drifted," with the aim to "synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly."

Another feature, called "Undercover mode," appears to allow contributions to public open-source repositories without revealing that they originate from an AI system. The prompts tied to this mode emphasise protecting "internal model codenames, project names, or other Anthropic-internal information." They also instruct that commits should "never include... the phrase 'Claude Code' or any mention that you are an AI," and should avoid attribution such as "co-Authored-By lines or any other attribution."

The codebase also includes a lighter feature called Buddy, described as a "separate watcher" that "sits beside the user's input box and occasionally comments in a speech bubble." These companions are small ASCII-style animations that can take on different shapes. Internal notes say the feature was to be released in a small number of places first, then more widely.

Other features referenced in the leak include an UltraPlan mode that lets Claude "draft an advanced plan you can edit and approve," with execution times ranging from 10 to 30 minutes. There is also mention of a Voice Mode for direct spoken interaction, a Bridge mode enabling remote sessions controlled from external devices, and a Coordinator tool designed to "orchestrate software engineering tasks across multiple workers" using parallel processes and WebSocket communication.
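The fan-out pattern the Coordinator description suggests, dispatching tasks to parallel workers and collecting their results, can be sketched generically as follows. This is an illustration of the pattern only, not Anthropic's implementation: threads stand in for the separate processes and WebSocket channels the report mentions, and the `worker` and `coordinate` names are invented for this example:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(task: str) -> str:
    # Stand-in for a worker session handling one engineering task.
    return f"done: {task}"

def coordinate(tasks: list[str], max_workers: int = 4) -> list[str]:
    """Fan tasks out to parallel workers and collect results in task order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, tasks))
```

In a process-based version, each worker would run in its own interpreter and report progress back over a socket rather than returning a value directly.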

Originally published by The Times of India
