Original Reddit post

I just hit my session limit with the Opus 4.7 1M-context model. I was surprised, as I had never come close to the rate limit when using non-1M-context models. Token usage exploded as the session context accumulated. Should I downgrade to a non-1M-context model, or should I auto-compact more frequently to avoid the rate limit?
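The "explosion" the poster describes has a simple arithmetic cause: in a chat session, every request resends the whole accumulated history, so cumulative input tokens grow roughly quadratically with the number of turns, while compaction caps the resent context. The sketch below illustrates this with entirely hypothetical numbers (2,000 new tokens per turn, compaction back to 4,000 tokens every 10 turns); it is a toy model, not how any particular client actually meters tokens.

```python
def total_tokens(turns, tokens_per_turn=2_000, compact_every=None, compact_to=4_000):
    """Toy model of cumulative input tokens over a chat session.

    Each turn adds `tokens_per_turn` of new content, and every request
    resends the entire accumulated context. If `compact_every` is set,
    the context is compacted back down to `compact_to` tokens every
    `compact_every` turns (a rough stand-in for auto-compaction).
    """
    context = 0
    total = 0
    for turn in range(1, turns + 1):
        context += tokens_per_turn   # new user + assistant content this turn
        total += context             # full context is resent with each request
        if compact_every and turn % compact_every == 0:
            context = min(context, compact_to)
    return total

# 50 turns with no compaction vs. compacting every 10 turns
print(total_tokens(50))                    # → 2550000
print(total_tokens(50, compact_every=10))  # → 710000
```

Under these made-up parameters, compaction cuts cumulative input tokens by more than 3x over 50 turns, which is why compacting more frequently (or keeping sessions shorter) is usually the first lever to pull before downgrading models.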

Originally posted by u/Jezsung on r/ClaudeCode