Original Reddit post

So a developer noticed something was off with Claude Code back in February: it had stopped actually trying to get things right and was just rushing to finish. He did what Anthropic wouldn't and ran the numbers himself, analyzing 6,852 Claude Code sessions and 17,871 thinking blocks. Reasoning depth dropped 67%. Claude went from reading a file 6.6 times on average before editing it to just 2, and one in three edits was made without reading the file at all. The word "simplest" appeared 642% more often in outputs. The model wasn't just thinking less, it was literally telling you it was taking shortcuts.

Anthropic said nothing for weeks, until the developer posted the data publicly on GitHub. Then Boris Cherny, head of Claude Code, appeared on the thread that same day. His explanation: "adaptive thinking" was supposed to save tokens on easy tasks, but it was throttling hard problems too. There was also a bug where, even when users set effort to "high", thinking was being zeroed out on certain turns. The issue was closed over user objections; the comment asking why it was closed got 72 thumbs up.
But here's the part that really got me: the leaked source code shows a check for a user type called "ant". Anthropic employees get routed to a different instruction set that includes "verify work actually works before claiming done". Paying users don't get that instruction. One price, two Claudes.

I felt this firsthand because I've been using Claude heavily for a creative workflow where I write scene descriptions and feed them into AI video tools like Magic Hour, Kling, and Seedance to generate short clips for client projects. Back in January, Claude would give me incredibly detailed shot breakdowns with camera angles, lighting notes, and mood references that translated beautifully into the video generators. By mid-February, the same prompts were coming back as bare-minimum one-liners like "a person walks down a street at sunset", with zero detail. I literally thought my prompts were broken, so I spent days rewriting them before I saw this GitHub thread and realized it wasn't me, it was the model.

The quality difference downstream was brutal, because these video tools are only as good as what you feed them: detailed prompts with specific lighting and composition notes give you cinematic output, while lazy prompts give you generic garbage. Claude going from thoughtful to "simplest possible answer" basically broke my entire production pipeline overnight.

This is the company that lectures the world about AI safety and transparency, and they couldn't be transparent about making their own model worse for paying customers while keeping the good version for themselves (although I still love Claude).

Originally posted by u/DangerousFlower8634 on r/ClaudeCode