Original Reddit post

For weeks, any mention of Claude’s performance regression was met with “you’re just vibe-coding” or “prompt better.” It turns out it wasn’t a “vibe shift”; it was a literal technical failure. Between the reasoning effort being throttled to “Medium,” the cache bug wiping context history, and the tool-call word limits, the model was objectively crippled.

It’s a massive problem that we’re building production workflows on a black box that can undergo significant architectural changes or logic regressions without a changelog. We aren’t “hallucinating” the nerf when the model suddenly fails at basic logic it handled perfectly 24 hours prior. The fact that it took this much community noise for them to revert the “optimizations” is the real issue.

Trusting the tool is hard when “optimizing for latency” means “breaking the reasoning engine.” Usage limits are back to normal, but the trust definitely isn’t. Stop telling people they’re “vibe-coding” when they’re actually just spotting a 40% regression in real time.

Originally posted by u/HussainBiedouh on r/ClaudeCode
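The post’s underlying argument is that silent model-side changes can only be distinguished from “vibes” by objective measurement. A minimal sketch of how one might do that: a fixed canary suite of prompts with known-good answers, run on a schedule against the hosted model, so a pass-rate drop shows up in data rather than anecdotes. This uses the Anthropic Python SDK; the prompt set, model ID, and alert threshold below are illustrative assumptions, not anything from the original post.

```python
# Regression canary sketch: a fixed prompt suite run against a hosted model.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. Prompts, model ID, and the
# baseline threshold are placeholders you would replace with your own.
import anthropic

# Fixed logic prompts with expected answers; in practice, version-control a
# larger suite that mirrors your actual production workload.
CANARY_CASES = [
    ("What is 17 * 23? Reply with only the number.", "391"),
    ("Is 'racecar' a palindrome? Reply with only yes or no.", "yes"),
]

BASELINE_PASS_RATE = 1.0  # illustrative: historical pass rate on this suite

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def run_canary(model: str = "claude-sonnet-4-20250514") -> float:
    """Return the pass rate of the fixed prompt suite against `model`."""
    passed = 0
    for prompt, expected in CANARY_CASES:
        message = client.messages.create(
            model=model,
            max_tokens=64,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = message.content[0].text.strip().lower()
        passed += expected in answer
    return passed / len(CANARY_CASES)


if __name__ == "__main__":
    rate = run_canary()
    print(f"canary pass rate: {rate:.0%}")
    # A sharp drop versus the baseline is the kind of measurable regression
    # the post describes, as opposed to a subjective "vibe shift."
    if rate < BASELINE_PASS_RATE:
        print("ALERT: pass rate below baseline; model behavior may have changed")
```

Run daily (e.g., from cron or CI), this turns “the model got worse overnight” into a logged, reproducible pass-rate delta, which is exactly the evidence the post says the community was asked to produce.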