- AMD’s AI Director Critiques Claude Code’s Efficacy Post-February 2026 Update
- Claude Code Found “Unreliable” for Complex Tasks After Extensive Evaluation
- Anthropic Claims Adjustments May Allow Higher Performance for Teams and Enterprises
Stella Laurenzo, AMD’s Director of AI, says Claude Code has declined in effectiveness since a February 2026 update, becoming increasingly unreliable for complex engineering tasks.
Her observations are based on an analysis of more than 6,800 programming sessions, roughly 235,000 tool interactions, and nearly 18,000 reasoning evaluations.
“Every senior engineer on my team has reported similar experiences,” Laurenzo remarked, pointing to a troubling rise in stop-hook violations—instances where Claude stopped work prematurely, deflected responsibility, or demanded unnecessary permissions. Such violations surged from zero in early March to nearly ten per day thereafter.
AMD Executive Raises Alarm Over Claude Code’s Diminishing Capabilities
In a recent GitHub post, user stellaraccident (Laurenzo’s handle) highlighted a correlation between the rollout of the “redact-thinking” feature (redact-thinking-2026-02-12) and a marked drop in performance on complex tasks.
Laurenzo argued that extended reasoning is essential to producing robust engineering output.
She also noted a shift from a research-first approach to an edit-first one, resulting in lower code quality, weaker adherence to established conventions, and reduced reliability over long sessions.
In response, Anthropic has offered a detailed explanation. Boris, a representative for Claude Code, clarified that the redact-thinking feature only hides reasoning from the user interface and does not impair the model’s reasoning ability.
The company has also rolled out adaptive thinking with Opus 4.6, in which the model dynamically adjusts how long it reasons to balance performance and efficiency.
“Some users favor prolonged processing time, even if it entails additional token usage,” Boris remarked. “To amplify intelligence further, adjust the effort to high via `/effort` or in your settings.json.”
Currently, a medium effort setting (effort=85) is the default. Anthropic says it will explore higher effort configurations for Team and Enterprise plans, letting those users benefit from longer reasoning at the cost of added token consumption and latency.
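Based on Boris’s description, raising effort in settings.json might look like the sketch below. Note this is an illustrative assumption, not a confirmed schema: the exact key name (`effort`) and whether it accepts a label like `"high"` or a numeric value (the article cites effort=85 as the medium default) are not specified in the source.

```json
{
  "effort": "high"
}
```

Alternatively, per the quoted remark, the same change can reportedly be made interactively with the `/effort` command inside a Claude Code session.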

“I appreciate the profound analysis and dedication involved in this assessment,” Boris said, acknowledging Laurenzo’s work scrutinizing Claude Code’s performance.
Source link: Techradar.com.