Original Reddit post

It might seem counterintuitive that they'd release a 1M-token context and I'd be managing my context even more closely, but there's method to the madness. First, even if the performance dropoff at, say, 800-900k tokens is not significant, it still exists. And I assume we need to be even more careful about rampant token usage with larger contexts.

The greatest benefit I'm seeing is being able to create and enact extremely large plans and feature implementations without needing to compact 4-5 times before they're complete. My workflow is like this:

1. Construct a plan, either with Claude's built-in planning or OpenSpec (I use OpenSpec).
2. /compact or /clear everything (this is similar to what Claude's built-in planning does, where it clears the context before it begins implementing a plan).
3. Begin implementing the plan and watch Claude work its magic without ever needing to compact once, never losing context.
4. Make any additional changes necessary while Claude is still fully on the same topic, with the same context.
5. Repeat.

I've been using the 1M-token context model since it was released and I've never had an automatic compaction yet. It's a thing of beauty to be able to get through significant feature implementations and then build on top of them without needing to compact, repeat myself, or have Claude explore areas of the code again. Compaction is a lot better than it used to be, but this change almost eliminates automatic compaction entirely if you're careful.
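The loop above can be sketched as a session transcript (illustrative only: `/compact` and `/clear` are the slash commands named in the post; the prompt wording is made up):

```
> Plan the feature with OpenSpec (or Claude's built-in planning)
> /clear                          # fresh context before implementation begins
> Implement the plan              # runs to completion without compaction
> Make any follow-up changes      # same session, full context still intact
  (repeat for the next feature)
```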

Originally posted by u/Kedaism on r/ClaudeCode