Original Reddit post

I have a Max subscription and I ask Claude to display its system_effort level at the bottom of every response. For weeks it consistently showed 85 (which maps to “high”). Today I noticed it dropped to 25 — that’s basically “low.”

This matters way more than most people realize. The effort parameter doesn’t just control how hard Claude thinks. It controls how many tool calls it makes, whether it follows system prompt instructions, and how thoroughly it cross-references information. A research team at FutureSearch documented that at effort=low, Opus 4.6 straight up ignored their system prompt instructions about research methodology.

So if you’ve been feeling like Claude has gotten dumber or lazier recently — shorter responses, skipping steps you asked for, not using tools when it should — this might be why. You’re not crazy. The model might literally be running at reduced capacity without telling you.

The thing is, rate limits and session limits are one thing. Anthropic has been transparent about those. But silently reducing the quality of each response while charging the same price is a completely different issue. With rate limits, at least you know you’re being limited. With effort degradation, you think you’re getting full-power Claude but you’re not.

Can anyone else check this? Ask Claude to report its reasoning effort level and see what you get. Curious if this is happening across the board or just during peak hours.
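If you want to run the same probe outside the app, here is a minimal sketch of how you might do it with the official `anthropic` Python SDK. Everything here is an assumption for illustration: `build_probe_request` is a hypothetical helper I made up, the model string is a placeholder, and — important caveat — the model’s self-reported effort level is just text it generates, not an authoritative readout of any server-side setting.

```python
# Hypothetical probe: ask Claude to append its reasoning effort level to a
# response, mirroring what the post describes doing in the app.
# NOTE: the self-report is generated text, not a guaranteed server-side value.

PROBE_INSTRUCTION = (
    "At the end of every response, state the reasoning effort level "
    "you are currently operating at, if you can determine it."
)

def build_probe_request(model="claude-opus-4-5",  # placeholder model name
                        user_text="Summarize the difference between "
                                  "rate limits and per-response quality."):
    """Build kwargs for anthropic.Anthropic().messages.create(**kwargs)."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": PROBE_INSTRUCTION,   # probe goes in the system prompt
        "messages": [{"role": "user", "content": user_text}],
    }
```

To actually send it you would need an API key: `client = anthropic.Anthropic()` then `client.messages.create(**build_probe_request())`. Comparing responses at different times of day would at least give a crude, repeatable version of the peak-hours experiment the post asks about.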

Originally posted by u/DistributionMean257 on r/ClaudeCode