Original Reddit post

I’ve been on Max for a couple months now and I swear something shifted. Regular Opus 4.6 responses feel fine. Like, competent but flat. The kind of output where you read it and go ok that works, but it doesn’t actually surprise you or catch edge cases the way it used to. Then I toggle ultrathink on the same prompt and suddenly it’s doing the thing again. Actually reasoning through the problem, catching contradictions, pushing back on bad assumptions. The stuff that made me switch from GPT in the first place.

I spent like two hours last Tuesday going back and forth on a system design question. Regular mode kept giving me these safe, generic answers. Turned on ultrathink and it immediately pointed out a race condition I hadn’t even considered. Same model. Same prompt. I even copied the exact text.

And here’s what bugs me. Ultrathink eats through your limits WAY faster. So basically the quality that used to be the default now costs 3x the tokens? I’m not saying it’s intentional, but the end result is the same. You either accept worse output or burn through your plan in half the time.

I keep going back to check my old conversations from like February, and the regular responses were genuinely better than what I get now without ultrathink. Maybe I’m crazy. Maybe my prompts got lazy. But a few people in the Discord were saying the same thing, so I don’t think it’s just me. Anyone else noticing this, or am I just losing it?

Originally posted by u/Ambitious-Garbage-73 on r/ClaudeCode