Anthropic’s Claude Code has faced significant backlash from the chip industry after a critical assessment by Stella Laurenzo, AMD’s Senior Director of AI, revealed a drastic decline in the model’s performance. Analyzing 6,852 coding sessions from January to March 2026, Laurenzo found that the model’s thinking depth has collapsed by approximately 67% since February, with the model reading less of the codebase before making edits and skipping files on about one-third of changes. The regression has driven up API retry rates and costs, and power users report a shift toward less reliable, more superficial outputs. The findings highlight the tension centralized frontier models face between scaling usage and maintaining quality, as operational pressures can push providers to compromise on performance.

AMD: AMD (Advanced Micro Devices) is a semiconductor and computing infrastructure company that partners with AI model developers. AMD is relevant here because its Senior Director of AI conducted an independent technical audit of Claude Code’s performance across thousands of real-world coding sessions.
Anthropic: Anthropic is an AI safety company that develops Claude, a frontier large language model used for a wide range of tasks including coding assistance. The company is central to this news because Claude Code, its coding-focused product offering, is the subject of performance regression analysis conducted by AMD’s AI leadership.
Stella Laurenzo: Stella Laurenzo is Senior Director of AI at AMD. She is the author of the technical analysis documented in this news, having evaluated Claude Code’s performance regression through logged data from 6,852 coding sessions between January and March 2026.

Model Quality: Claude Code’s thinking depth collapsed by approximately 67% since February 2026, with the model reading less of the codebase before making edits and skipping files on roughly one-third of changes.
Enterprise Impact: API retry rates increased as Claude Code quality declined, driving higher computational costs and reduced output reliability for power users and engineering workflows.
Infrastructure Pressures: Centralized frontier AI models face recurring tradeoffs between scaling adoption and maintaining output quality, with operational pressures potentially incentivizing cost-cutting over model performance.
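For context on how session-level metrics like these could be derived, the sketch below shows one way to aggregate hypothetical per-session logs into monthly averages for thinking depth, codebase reading, file skips, and retries. The schema and field names are assumptions for illustration only; the source does not describe Laurenzo’s actual log format or analysis tooling.

```python
# Hypothetical sketch: aggregating per-session coding logs into the regression
# metrics described above. The Session schema is an assumption for illustration
# and does not reflect Laurenzo's actual methodology or Anthropic's log format.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Session:
    month: str                   # e.g. "2026-02"
    thinking_tokens: int         # tokens spent in the model's reasoning phase
    files_read: int              # files the model opened before editing
    files_edited: int            # files it actually modified
    skipped_required_file: bool  # edited without reading a relevant file
    api_retries: int             # retries needed to complete the session


def summarize(sessions: list[Session]) -> dict:
    """Group sessions by month and compute per-month averages."""
    by_month: dict[str, list[Session]] = {}
    for s in sessions:
        by_month.setdefault(s.month, []).append(s)

    report = {}
    for month, group in sorted(by_month.items()):
        report[month] = {
            "avg_thinking_tokens": mean(s.thinking_tokens for s in group),
            "avg_files_read_per_edit": mean(
                s.files_read / max(s.files_edited, 1) for s in group
            ),
            "file_skip_rate": mean(s.skipped_required_file for s in group),
            "avg_retries": mean(s.api_retries for s in group),
        }
    return report


def pct_decline(before: float, after: float) -> float:
    """Percent decline from a baseline month to a later month."""
    return 100.0 * (before - after) / before
```

Under these assumptions, a roughly 67% collapse in thinking depth would appear as `pct_decline(report["2026-02"]["avg_thinking_tokens"], report["2026-03"]["avg_thinking_tokens"])` of about 67, and a `file_skip_rate` near 0.33 would correspond to files being skipped on about one-third of changes.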

Sources