Anthropic Brings Repeatable Routines to Claude Code, Turning AI Into a Persistent Engineering Teammate

WebProNews, 10 days ago

For months, developers using Anthropic's Claude Code have operated in a familiar loop: open a terminal, type a natural-language instruction, watch the AI execute, then repeat. Every session started from scratch. Every workflow had to be re-explained. That's about to change.

Anthropic has introduced a feature called Routines to Claude Code, its command-line coding agent, giving developers the ability to define repeatable, multi-step workflows that the AI can execute on command. Think of it as saved procedures for an AI pair programmer -- scripts written in plain English rather than Bash, but with the flexibility to adapt to context each time they run.

The feature, first detailed by 9to5Mac, represents a quiet but significant shift in how AI coding assistants are designed. Rather than optimizing solely for one-off queries, Anthropic is building infrastructure for persistent, structured collaboration between developers and AI agents. The implications stretch well beyond convenience.

How Routines Work -- and Why They Matter

At its core, the Routines feature lets developers create markdown files that describe a sequence of steps Claude Code should follow. These files live inside a project's directory structure, making them version-controllable, shareable across teams, and portable between machines. A routine might instruct Claude to pull the latest changes from a Git branch, run a test suite, identify failing tests, propose fixes, and then open a pull request -- all triggered by a single command.
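Anthropic's exact file format and location aren't specified in this report, but given the description -- a markdown file of plain-English steps living in the project directory -- a routine might look something like the following sketch. The path, filename, and step conventions here are illustrative assumptions, not confirmed syntax:

```markdown
<!-- Hypothetical path: routines/fix-and-pr.md -->
# Fix failing tests and open a PR

1. Pull the latest changes from the main branch.
2. Run the project's test suite.
3. Identify any failing tests and propose a fix for each.
4. Apply the fixes and re-run the suite until it passes.
5. Open a pull request summarizing the changes.
```

Because the steps are ordinary markdown checked into the repository, the file can be reviewed, diffed, and shared like any other project artifact.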

The syntax is deliberately simple. Anthropic opted for natural language with lightweight structural conventions rather than inventing a new DSL. This matters. It means any developer can write a routine without learning a new programming language, and any developer can read one and immediately understand what it does.

But here's where it gets interesting. Routines aren't rigid scripts. Claude Code interprets them with contextual awareness. If a step says "fix any type errors in the changed files," the AI determines which files changed, what the type errors are, and how to resolve them -- dynamically, each time. The routine provides the skeleton. Claude provides the intelligence.

This sits at an important intersection. Traditional automation tools like Makefiles, shell scripts, and CI/CD pipelines are deterministic: they do exactly what you tell them, every time. AI agents are flexible but unpredictable: they interpret intent but may drift. Routines attempt to bridge that gap -- giving developers the repeatability of automation with the adaptability of an AI agent.

For engineering teams managing large codebases, the appeal is obvious. Code review checklists, deployment preparation steps, onboarding procedures for new contributors, bug triage workflows -- all of these involve repetitive sequences of tasks that require judgment at each step. Routines formalize the sequence while delegating the judgment.

And they're composable. One routine can reference another. A "prepare release" routine might invoke a "run full test suite" routine, then an "update changelog" routine, then a "tag and push" routine. Nesting keeps individual routines focused and reusable.
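Assuming routines can invoke one another by name (the referencing style below is a guess, not documented behavior), that "prepare release" example might read:

```markdown
<!-- Hypothetical path: routines/prepare-release.md -->
# Prepare release

1. Run the "run full test suite" routine.
2. Run the "update changelog" routine.
3. Run the "tag and push" routine.
```

Each sub-routine stays short and independently usable, while the parent routine captures the overall process.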

Anthropic has also built in safeguards. Routines can include explicit checkpoints where Claude Code pauses and asks the developer for confirmation before proceeding. This is critical for high-stakes operations -- deploying to production, modifying database schemas, or making bulk changes across hundreds of files. The developer stays in the loop without having to babysit every step.
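The report doesn't describe how checkpoints are written. In a plain-English routine file, one plausible form is simply an explicit instruction to pause, as in this illustrative sketch:

```markdown
<!-- Hypothetical path: routines/deploy-production.md -->
# Deploy to production

1. Verify the release branch passes all checks.
2. Build the production artifacts.
3. Checkpoint: stop and ask the developer to confirm before proceeding.
4. Deploy to the production environment.
5. Run smoke tests and report the results.
```

Placing the confirmation step before the irreversible action is the point: everything above it can run unattended, while the deployment itself waits for a human.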

The feature ships as part of Claude Code's standard tooling, available to all subscribers on Anthropic's Max plan. No separate API costs for routine execution beyond normal token usage.

The Competitive Context: Coding Agents Are Becoming Workflow Engines

Anthropic isn't operating in a vacuum. The race to build the dominant AI coding assistant has intensified dramatically in 2026, and the competitive field is crowded.

GitHub Copilot, backed by Microsoft and OpenAI, remains the most widely adopted tool, with deep integration into VS Code and GitHub's broader platform. Copilot has been expanding its own agent capabilities, including the ability to handle multi-file edits and respond to GitHub Issues autonomously. Google's Gemini Code Assist has similarly pushed into agentic territory, with tight integration into Google Cloud's developer tools.

Then there are the startups. Cursor, the AI-native code editor, has attracted a devoted following among individual developers and small teams. Devin, from Cognition, positions itself as a fully autonomous software engineer. Poolside and Magic are building foundation models specifically for code generation.

What distinguishes Anthropic's approach with Routines is the emphasis on developer control. Where some competitors lean toward full autonomy -- "just tell the AI what you want and walk away" -- Anthropic is betting that professional developers want structured collaboration. They want to define the process. They want checkpoints. They want to version-control their AI workflows the same way they version-control their code.

This philosophy aligns with how Anthropic has positioned Claude more broadly: capable but controllable, powerful but transparent. The Routines feature is that philosophy made concrete in a developer tool.

It also reflects a growing recognition across the industry that the value of AI coding tools isn't just in generating code. It's in managing the entire software development workflow. Writing code is perhaps 30% of what developers actually do. The rest is reading code, reviewing code, debugging, testing, deploying, documenting, communicating. Tools that only help with generation miss most of the job.

Routines address this directly. A routine doesn't have to generate a single line of code. It could analyze a codebase for security vulnerabilities, summarize recent changes for a standup meeting, or audit dependency versions. The feature reframes Claude Code from "AI that writes code" to "AI that participates in engineering processes."

So where does this go next? The logical extension is integration with external systems. Routines that can interact with Jira, Slack, Linear, or PagerDuty. Routines triggered automatically by webhooks -- a new PR opens, and Claude runs a predefined review routine. Routines that execute on a schedule, like a nightly codebase health check.

Anthropic hasn't announced these capabilities yet. But the architecture of Routines -- markdown files, composable steps, checkpoint-based control flow -- is clearly designed to accommodate them. The foundation is being laid for something more ambitious than a coding assistant. It's the scaffolding for an AI-powered engineering operations layer.

For now, the immediate impact will be felt by teams already using Claude Code in their daily workflows. The ability to encode institutional knowledge -- "here's how we do deployments," "here's our code review checklist," "here's how we handle hotfixes" -- into executable, AI-powered routines is genuinely useful. It reduces onboarding friction. It enforces consistency. And it frees senior engineers from repeatedly explaining processes that can be documented once and executed indefinitely.

Whether Routines become a standard feature that every AI coding tool eventually copies, or a differentiator that pulls developers toward Claude Code specifically, depends on execution. The concept is sound. The implementation, based on early reports, is clean and intuitive. But adoption will hinge on reliability -- on whether developers trust Claude Code to follow their routines accurately, consistently, and without unexpected behavior.

Trust is the currency here. Not tokens.

Anthropic appears to understand that. The checkpoint system, the use of plain markdown, the decision to keep routines inside the project repository -- all of these are trust-building design choices. They give developers visibility and control. They make the AI's behavior auditable and reproducible.

For an industry that's spent the last three years oscillating between AI hype and AI skepticism, that's exactly the right instinct.

Originally published by WebProNews
