Original Reddit post

https://news.ycombinator.com/item?id=47660925
https://github.com/anthropics/claude-code/issues/42796

This GitHub issue is a full evidence chain for the decline in Claude Code quality after the February changes. The author went through logs, metrics, and behavior patterns instead of just throwing out opinions.

The key number is brutal: the issue estimates that thinking depth dropped about 67% by late February. It also points to visible behavioral changes, like less reading before editing and a sharp rise in stop hook violations.

This hit me hard because I have been dealing with the same problem for a while. I kept saying something was clearly wrong, but the usual reply was that it was my usage or my prompts. Then someone finally did the hard work and laid out the evidence properly. Seeing that was frustrating, but also validating.

Anthropic should spend less energy making this kind of decline harder to see and more energy actually fixing the model.

Originally posted by u/takeurhand on r/ClaudeCode