I’ve really been enjoying Opus 4.6 Thinking with the One Million Context window. Opus has been the best coding model for a while now, and the One Million Context has been a game changer for me because I find myself not having to repeat the features I work on. A lot of my features actually end up sitting at around 250,000 to 300,000 tokens. In the past, that was just above the 200,000 token limit, meaning my chats would get summarized and a lot of context would be lost. The LLM would literally start hallucinating about what I wanted to do next. That’s not even counting when I’m working on gigantic features, which might be closer to 400,000 tokens.

The truth is, the full One Million Context window is kind of ridiculous for most use cases. Performance degrades so much at that point that it’s really unusable. But for my use cases, getting to that 250,000 to 300,000 (and sometimes 320,000) token context window has been a game changer for my startup and the features we build to help our users achieve their goals.

I’ve been seeing a lot of posts about Sonnet 4.6 and Opus 4.6, but I haven’t really seen many about the One Million Context window and how useful it’s been. How has your experience been with it?
Originally posted by u/InevitableSense7507 on r/ClaudeCode
