I love CC. I've been using it since March 2025 and built an AI service and website, used by a US state government, that deployed two months ago, with nice passive income and world travel ideas. Big fan of the 1M context: I've been using it with GPT-codex to do multi-agent peer reviews of CC design specs and code.

Ever since I switched to Opus 4.6 1M, I get this nagging feeling it's just not understanding me as well. I even keep my context low and /memory-session-save and /clear at around 250K, since I'm used to doing that with CC and getting great results. I use a tight methodology with lots of iteration and time on specs, reviews, and small code bursts for tight feature/fix cycles.

Has anyone else noticed that Opus 4.6 has a harder time figuring out what you're asking with the same prompts that worked before? For example, I used to be able to just say "QC code and then test it" and that was fine, but now Opus asks me "what area should we QC?" ... I'm like "duh, the PR we've been working on for the last two hours," and then it proceeds. It also seems to have a harder time initiating skills.

Must be just me, I'm off my meds this week, LOL. Is anyone else seeing this quality difference? Just wondering.
Originally posted by u/OmniZenTech on r/ClaudeCode
