Original Reddit post

A very detailed analysis of performance degradation in Opus was posted by AMD's senior director of AI on GitHub: https://github.com/anthropics/claude-code/issues/42796

Several high-visibility articles and posts covered it:

- https://news.ycombinator.com/item?id=47660925
- https://www.pcgamer.com/software/ai/amds-senior-director-of-ai-thinks-claude-has-regressed-and-that-it-cannot-be-trusted-to-perform-complex-engineering/
- https://www.theregister.com/2026/04/06/anthropic_claude_code_dumber_lazier_amd_ai_director/

Staff from Anthropic came back with a reply (https://github.com/anthropics/claude-code/issues/42796#issuecomment-4194007103), which basically amounted to: set "CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING" to 1.

Anthropic's argument is that they accepted degraded performance with adaptive thinking because Opus was costing people too many tokens and eating up their quota too fast. However, as for the title: while they can't be 100% sure, as far as the issue OP can tell, they had already tried this setting and it didn't change anything. What they want is a baseline, a "this is the best we have" option, so they don't run into this going forward, even if it costs more.

Some possibilities:

- Most cynical: Anthropic (and other labs) dial up performance early to grab market share, then dial it down before the next release to lower costs and make the jump to the next model look bigger.
- Cynical, but fair: AMD is mostly trying to pressure these companies into competing harder, because it is worried about outsourcing its development to one company.
- More generous, but only a little: Anthropic realized that Opus was able to find critical vulns and had to dial down its capability. Even then, it seems deceptive.
- AMD didn't try the new suggestions hard enough.

Of course, it's likely a mixture of all of the above.
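For reference, the workaround from Anthropic's reply is just an environment variable. A minimal sketch of applying it in a POSIX shell before launching Claude Code (the variable name comes from the linked comment; whether it actually restores prior behavior is exactly what the issue OP disputes):

```shell
# Disable adaptive thinking, per Anthropic's suggestion in the linked comment.
# Note: the issue OP reports this made no difference for them.
export CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1

# Confirm it is set in this shell before starting Claude Code:
echo "$CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING"
```

To persist it across sessions, the usual approach would be adding the `export` line to your shell profile (e.g. `~/.bashrc` or `~/.zshrc`), though that detail is an assumption about typical usage rather than anything stated in the thread.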
At the very least, rug-pulling changes that don't clearly disclose the introduced performance regression are very bad, as they create significant extra work for users, even if they were optimistically meant to lower users' costs.

submitted by /u/kaggleqrdl

Originally posted by u/kaggleqrdl on r/ArtificialInteligence