Anthropic announced that the recent quality issues with Claude Code stemmed primarily from unintended changes to reasoning settings, memory handling, and system prompts. Its postmortem identified three root causes: a lowered default reasoning effort, a cache bug that repeatedly wiped reasoning traces, and system prompts that limited output verbosity. In response, Anthropic reverted the reasoning settings to their previous state, fixed the memory bug, and adjusted the system prompts to restore output quality. The findings matched what developers had been reporting before the postmortem: shallow reasoning and incomplete tasks.
Anthropic: Anthropic is an AI research company building safe and reliable systems, best known for the Claude family of models that power tools for coding, design, and economic analysis. It recently published an engineering postmortem attributing quality drops in Claude Code to unintended changes in reasoning settings, memory handling, and prompts. The company has since resolved the issues and launched complementary products such as Claude Design for visual collaboration.
Claude Code: Claude Code is Anthropic’s agentic coding assistant that operates in the terminal, reading entire codebases to build features, fix bugs, run tests, and deliver commits. Its recent performance degradation stemmed from a reasoning effort downgrade, a memory cache bug that erased session history, and verbosity-reducing prompts. Anthropic confirmed these bugs in a postmortem and restored full capabilities following developer feedback.
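The reasoning effort downgrade is the kind of regression an API client can guard against by setting an explicit extended-thinking budget rather than relying on the server-side default. Below is a minimal sketch using the Anthropic Python SDK's documented `thinking` parameter; the model ID, budget, and prompt are illustrative assumptions, not values from the postmortem.

```python
# Sketch: pin an explicit extended-thinking budget instead of relying on
# a server-side default, which the postmortem says was silently lowered.
# Model ID, budget, and prompt are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=8192,                   # must exceed the thinking budget
    # Explicitly enable extended thinking with a fixed token budget so a
    # changed default cannot quietly reduce reasoning depth.
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[{"role": "user", "content": "Refactor this function..."}],
)

# Thinking blocks (the model's reasoning trace) arrive alongside text blocks.
for block in response.content:
    if block.type == "thinking":
        print("reasoning trace:", block.thinking[:200])
    elif block.type == "text":
        print("answer:", block.text)
```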
Root Causes: Degradation was traced to a lowered default reasoning effort, a cache bug that repeatedly wiped reasoning traces (sketched below), and system prompts that curbed verbose output.
Fixes Applied: Anthropic reverted the reasoning downgrade, patched the memory bug, and adjusted system prompts to restore output quality.
User Observations: Before the postmortem, developers reported shallow reasoning, skipped reviews, and incomplete tasks.
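Anthropic has not published the offending cache code, so the following is a hypothetical Python illustration of the bug class the postmortem describes: an expiry check inverted so the cache clears stored reasoning traces on every read instead of only after the time-to-live elapses. The `SessionCache` class and its fields are invented for illustration only.

```python
# Hypothetical illustration of the bug class described in the postmortem:
# a session cache whose expiry check is inverted, wiping stored reasoning
# traces on every access rather than only after the TTL elapses.
# This is NOT Anthropic's actual code.
import time


class SessionCache:
    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self.created = time.monotonic()
        self.traces: list[str] = []

    def add_trace(self, trace: str) -> None:
        self.traces.append(trace)

    def get_traces(self) -> list[str]:
        age = time.monotonic() - self.created
        # BUG: the comparison is inverted. A fresh cache (age < ttl) is
        # treated as expired; the correct check would be `age >= self.ttl`.
        if age < self.ttl:
            self.traces.clear()
            self.created = time.monotonic()
        return self.traces


cache = SessionCache()
cache.add_trace("step 1: analyzed failing test")
print(cache.get_traces())  # [] -- history gone, mirroring the reported symptom
```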
