I’ve been running some pretty heavy refactoring sessions lately, mostly bouncing between Opus 4.6 and the new GPT-5.3-Codex. Usually, around the 30k token mark, I start getting those hallucinations where the model forgets a function I defined a few messages back. But with Opus 4.6, I’m actually holding state way longer than I expected. The 1M context window is one thing, but it feels like the retrieval is just… stickier? I threw a massive legacy codebase at it, and it remembered a weird dependency constraint I mentioned at the very start of the chat. Is anyone else seeing this, or am I just getting lucky with my prompts? Curious if it holds up for non-Python projects too.
Originally posted by u/HarrisonAIx on r/ClaudeCode
