Original Reddit post

I saw some folks suggesting that downgrading to v2.1.74 fixes the usage limit bug (e.g. in this post), so I ran a controlled test to check. Short answer: it doesn't. Longer answer: the results are worth sharing regardless.

The setup

I waited for my session limit to hit 0%, then ran:

- The exact same prompt
- Against the exact same codebase
- With the exact same Claude setup (CLAUDE.md, plugins, skills, rules)
- Using the same model: Opus 4.6 1M, high reasoning

Tested on v2.1.83 (latest) first, then v2.1.74 ("stable"). I'm on Max 5x, and both runs happened during the advertised 2x usage period.

Results

So yeah, nearly identical results.

What was the task? A rendering bug: a 0.5px div with a linear-gradient background (acting as a border) wasn't showing up in Chrome's PDF print dialog at certain horizontal positions.

- v2.1.83 invoked the superpowers:systematic-debugging skill; v2.1.74 didn't.
- Despite that difference, both sessions followed a very similar reasoning and debugging process.
- Both arrived at the same conclusion and implemented the same fix. Which was awfully wrong.

(I ended up solving the bug myself in the meantime; took me about 5 or 6 minutes :D)

"The uncomfortable part" (a.k.a. tell me you ran a post through AI without telling me you ran it through AI)

During the 2x usage period, on the Max 5x plan, Opus 4.6 consumed ~118–119K tokens and pushed the session limit by 6–7%. That's it. And it even got the answer wrong!!

I should note that the token counts above are orchestrator-only. As subscribers (not API users), we currently have no way to measure total tokens across all sub-agents in a session, AFAIK. That said, I saw no sub-agents being invoked in either of the sessions I tested.

So yeah, the version downgrade has turned out not to be the fix I was hoping for. And, separately, the usage limits on this tier still feel extremely tight for what's supposed to be a 2x period.

submitted by /u/toiletgranny

Originally posted by u/toiletgranny on r/ClaudeCode