Original Reddit post

After a few days working with the Opus [1m] model, after ONLY using Sonnet (with the 200k token window), I am actually surprised at how different my experience with Claude is. It just doesn't compact. I think I may be helping my situation because I've had to focus so much on optimizing token use. Maybe that's paying off now. But I tasked it with creating a huge plan for a new set of features, then had it build it overnight, and continued to tinker with the implementation this morning. It's sitting here with 37% of available context used. I didn't expect to be surprised, but I legitimately am.

Originally posted by u/cosmicdreams on r/ClaudeCode