Claude Code Glitch Explained: Anthropic Fixes Issues After User Backlash

The Hans India · 58m ago

Anthropic identifies three technical changes behind Claude Code's perceived decline and restores performance after widespread developer complaints prompted a detailed investigation.

After weeks of mounting frustration among developers, Anthropic has addressed concerns that its AI coding assistant, Claude Code, had been losing its edge. Users across platforms like X and Reddit reported that the tool -- once praised for generating large volumes of reliable code from minimal prompts -- had started behaving unpredictably and, in some cases, inefficiently.

The complaints were consistent and widespread. Some users questioned whether the system was "broken," while others described it as unreliable or inconsistent. One particularly striking Reddit comment compared the AI to "a lazy human employee," adding: "It's like an employee that goes behind your back, only does what it feels like to do at that moment, jumps into other areas when specifically asked not to."

Anthropic has now responded, confirming that the perceived drop in performance was not intentional. "We never intentionally degrade our models," the company said, adding that a detailed internal investigation uncovered three separate issues that collectively impacted user experience. A fix addressing all three problems was rolled out on April 20, along with a reset of user limits to restore full access.

According to the company, the first issue dates back to March 4, when the default reasoning setting for Claude Code was changed from "high" to "medium." This adjustment was intended to speed up response times, but it had an unintended consequence. Users, Anthropic noted, preferred more accurate and thoughtful outputs over faster ones. The company acknowledged this mismatch, stating that users said "they'd prefer to default to higher intelligence and opt into lower effort for simple tasks."
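The design lesson here is about defaults: rather than lowering effort for everyone, users asked to start at the highest setting and opt down for simple tasks. A minimal, hypothetical Python sketch of that pattern (the field names and values are illustrative, not Anthropic's actual configuration):

```python
from dataclasses import dataclass


@dataclass
class RequestSettings:
    """Hypothetical per-request settings; names are illustrative only."""

    # Default to the highest reasoning effort, matching the preference
    # Anthropic reported: users opt *down* for simple tasks rather than
    # being silently opted down for all tasks.
    reasoning_effort: str = "high"


def make_settings(opt_in_low_effort: bool = False) -> RequestSettings:
    # Callers explicitly choose the faster, lower-effort mode.
    effort = "medium" if opt_in_low_effort else "high"
    return RequestSettings(reasoning_effort=effort)
```

The point of the sketch is simply that the opt-in is explicit at the call site, so a speed optimization never changes behavior for users who did not ask for it.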

The second problem emerged on March 26 and involved how Claude handled memory in inactive sessions. The system was designed to clear older context after an hour of inactivity. However, a bug caused this reset to repeat continuously in every interaction instead of happening just once. As Anthropic explained, "a bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive."
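The failure mode described, a one-time cleanup that re-fires on every turn, is a classic state-management bug: the "has this already happened?" signal is never updated after the cleanup runs. A minimal, hypothetical Python sketch of the pattern (this is an illustration of the bug class, not Anthropic's actual code):

```python
class Session:
    """Hypothetical chat session with idle-context cleanup."""

    IDLE_LIMIT = 3600.0  # seconds of inactivity before old context is cleared

    def __init__(self, refresh_timestamp: bool = True):
        self.context: list[str] = []
        self.last_active = 0.0
        # The buggy variant never refreshes `last_active`, so the "idle"
        # condition stays true on every turn after the first timeout and
        # the context is wiped again and again.
        self.refresh_timestamp = refresh_timestamp

    def handle_turn(self, message: str, now: float) -> None:
        if now - self.last_active > self.IDLE_LIMIT:
            self.context.clear()  # meant to happen once per idle period
        self.context.append(message)
        if self.refresh_timestamp:
            self.last_active = now
```

With `refresh_timestamp=False`, every message after the first hour-long gap triggers the clear, leaving only the latest message in context, which is exactly the "forgetful and repetitive" behavior users reported.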

The third issue surfaced on April 16 and was linked to a reduction in response verbosity. While the intention was to make answers more concise, Anthropic admitted that "in combination with other prompt changes, it hurt coding quality."

Together, these changes created what Anthropic described as a "broad and inconsistent degradation," since each issue affected different users at different times. Importantly, the company clarified that its API and core inference systems were not impacted -- only tools like Claude Code, the Claude Agent SDK, and Claude Cowork were affected.

Boris Cherny, head of Claude Code, emphasized the complexity of the investigation, stating, "We take these reports incredibly seriously. In my time on the team, this has probably been the most complex investigation we've had."

With the fixes now in place, Anthropic says performance should be back to expected levels as it works to regain developer trust in its AI coding platform.

Originally published by The Hans India
