It’s been a rough month

Claude has been my favorite coding model for a long time. At its best, it felt like having a brilliant, tireless collaborator. I could give it a vague idea, a rough direction, or a messy engineering problem, and it would come back with clean, production-ready code while I stayed focused on architecture and decision-making. For long stretches, I barely coded by hand because Claude handled the heavy lifting with precision and care.

It understood context. It respected architecture I had built over years. It usually made thoughtful changes without needing constant supervision. I trusted it.

That’s why what happened over the last month has been so hard to accept. I no longer trust it for serious coding work the way I used to. And for me, that is the real loss. I’ve been forced back into writing code by hand, with Claude sitting beside me as an assistant instead of functioning like a reliable coding partner. I had not worked that way for months. Now I do, mostly because of two things: rate limits and model quality degradation.

The rate-limit problems came first, and they felt like a quiet betrayal. I’m on Max 20x. A lot of us are. Sessions that used to last hours now evaporate far too quickly. A single thoughtful prompt can eat an absurd chunk of a five-hour rolling window. Yes, there are times when it seems better, but the pattern keeps coming back. Weekly caps that were supposed to feel generous now disappear in a few days under what I would consider normal usage: one session, one task, no parallel abuse, no weird stress testing. And even when the limits are technically working, the experience no longer feels dependable.
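If you want to at least see the cutoff coming, here is a minimal sketch of what I mean: a local tracker that logs each prompt you send and estimates when the five-hour window should clear. To be clear about the assumptions: the log path is invented, and the reset logic just mirrors the pattern described above, not any official or documented contract.

```python
#!/usr/bin/env python3
"""Minimal local tracker for a five-hour rolling usage window.

Sketch only: assumes the window opens at your first prompt and clears
five hours later. The log path is arbitrary; nothing here is official.
"""
import json
import time
from datetime import datetime, timedelta
from pathlib import Path

LOG = Path.home() / ".claude_session_log.jsonl"  # hypothetical location
WINDOW = timedelta(hours=5)

def record_prompt() -> None:
    """Append a timestamp each time you send a prompt."""
    with LOG.open("a") as f:
        f.write(json.dumps({"ts": time.time()}) + "\n")

def window_status() -> str:
    """Report when the current window opened and when it should clear."""
    if not LOG.exists():
        return "no prompts recorded yet"
    stamps = [json.loads(line)["ts"] for line in LOG.read_text().splitlines() if line]
    now = time.time()
    # Prompts older than five hours no longer count against the window.
    live = [t for t in stamps if now - t < WINDOW.total_seconds()]
    if not live:
        return "window is clear"
    opened = datetime.fromtimestamp(min(live))
    return (f"{len(live)} prompts since {opened:%H:%M}; "
            f"window should clear around {opened + WINDOW:%H:%M}")

if __name__ == "__main__":
    record_prompt()
    print(window_status())
```

Wire record_prompt() into whatever launches your sessions and you at least stop being surprised by the wall, even if you cannot move it.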
New users will probably never understand how good Claude Code used to feel.

The Illusion of Opus 4.6

massive context window and massive nerf

A massive context window, adaptive thinking, more agentic workflows, code execution in web tools, bigger promises, bigger packaging. On paper, it sounded incredible. In practice, for my workflow, it felt like the opposite.

What I wanted was not more flashy features. I wanted the core thing that mattered most to stay excellent: deep reasoning, careful edits, strong context handling, and trustworthy behavior inside real codebases. Instead, the model started feeling weaker where it mattered most. A lot of people felt that before they could prove it. The model stopped feeling like a senior engineer and started feeling like an overconfident intern trying to finish the ticket fast and move on. It still had moments of brilliance, but the consistency was gone.

And the 1M-context story never matched my lived experience anyway. In actual use, it often felt like quality started slipping much earlier. Past a certain point, the bigger window became more of a marketing headline than something that translated into better engineering outcomes.

the thinking redaction regression

I do not know exactly what changed internally, but from the outside, something clearly changed. The deeper multi-step reasoning that used to make Claude so valuable on serious engineering tasks started to feel reduced. Problems that once got careful, structured thought now got shallow shortcuts. Complex debugging sessions that used to feel patient and methodical started ending in rushed guesses, partial fixes, or confident but fragile answers.

That shift is devastating when you use the model for real work. Because coding is not just about generating syntax. It is about tracing systems, preserving intent, respecting constraints, and solving problems without breaking everything around them. That is exactly where Claude used to shine. Now it’s gone.

the death of surgical edits

One of the best things about older Claude Code was how surgical it could be. It used to inspect enough of the codebase to understand what needed to change, then make focused edits with minimal collateral damage. It felt like it had discipline. From my own logs and experience, it seems to read less before acting, which would explain a lot. The result is that it no longer feels like it is carefully entering the codebase, understanding the local architecture, and making deliberate changes. It feels more impulsive. And once that happens, trust disappears fast.

code mutilation

This is where the decline becomes impossible to ignore. Instead of precise modifications, I’ve watched it overwrite entire sections it should have left alone. I’ve seen it insert code in bad locations, damage surrounding structure, splice changes into places that break the flow of the file, and propose “simple fixes” that ignore deeper semantic relationships.

That is not a small regression. That is a workflow-breaking regression. Because once the model starts damaging files instead of editing them, you can’t relax around it anymore. Every output becomes something you have to inspect defensively. And that changes the whole value proposition. A tool that once saved cognitive energy starts consuming it.

The Rate Limit Wall

As frustrating as the quality drop has been, the infrastructure pain makes everything worse. The token burn has felt wildly out of proportion. Many of us saw behavior that strongly suggested large hidden overhead in requests, especially in newer Claude Code versions. Whether the company wants to call that system overhead, internal framing, or something else, the lived result is the same on the user side: sessions vanish faster than the visible work seems to justify.

Peak-hour behavior also seemed real. Limits felt harsher during busy windows, and over time the inconsistency spread beyond those hours too. Sometimes the same kind of work would be affordable. Sometimes it would burn through usage absurdly fast for no obvious reason. That unpredictability kills confidence. You stop planning around the tool because you no longer know whether it will last long enough to finish the job.
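You cannot see inside the requests Claude Code makes on your behalf, but you can probe the raw API to see how the tokens in a visible prompt compare with the input tokens actually billed for it. Below is a rough sketch, assuming the official anthropic Python SDK and an example model id; it illustrates how one might measure overhead, not what Claude Code itself sends.

```python
"""Rough probe of per-request token overhead via the Anthropic API.

Compares the token count of the visible prompt (token-counting endpoint)
against the input tokens reported in the response usage block.
"""
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # example model id; substitute your own
messages = [{"role": "user", "content": "Explain this traceback in one sentence."}]

# What the visible prompt alone costs, per the token-counting endpoint.
visible = client.messages.count_tokens(model=MODEL, messages=messages).input_tokens

# What the request actually consumed, per the usage block in the response.
resp = client.messages.create(model=MODEL, max_tokens=64, messages=messages)
billed = resp.usage.input_tokens

print(f"visible prompt: {visible} tokens, billed input: {billed}, "
      f"gap: {billed - visible}")
```

For a bare call like this, the two numbers should nearly match. The interesting version is the same comparison with a system prompt and tool definitions attached, which shows how much every request costs before you have typed a word.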
I’m tired

This is the part that is hardest to explain to people who did not build their workflow around it. When Claude Code was great, it changed the way I worked. Not in a gimmick way. In a real way. I stopped typing huge amounts of code by hand. I moved faster. I trusted the model to carry substantial implementation load while I focused on architecture, product direction, and systems thinking. It genuinely felt like a force multiplier.

Now every output needs heavy verification. Now I ask it to debug something complex and it often gives up halfway through, reaches for the nearest patch, or tries to talk its way around the problem. Then, when called out, it offers those hollow apologies that somehow make it worse because they confirm the lack of rigor: “You’re right, that was lazy and wrong.” Exactly. That’s the problem. It is exhausting to babysit a tool that used to feel self-directed. The saddest part is realizing that the thing I built around is no longer the thing I trusted.

The hardest part

The hardest part is not even the disappointment. It is the lock-in. A lot of us did not just casually use Claude Code. We built around it. Our workflows, orchestration, habits, prompts, project structure, custom tooling, and even the way we think about implementation got shaped around what Claude was good at. Switching providers means reworking infrastructure, retraining habits, rebuilding assumptions, and accepting that months of adaptation were tied to a platform that no longer behaves the way it once did.

That is what makes this feel like a trap. The bigger context window means nothing to me if the model is too constrained, too inconsistent, or too shallow in practice to use that context well.

What now

I still believe in what Claude was. I still respect the team for building something that, at its best, genuinely changed how I worked. And maybe that is why this feels so personal. You do not get this frustrated with something that was always mediocre. You get this frustrated with something that used to be exceptional.

You may stay, hoping the next update restores some of the magic, but I’m tired. Today is the last day I use Claude Code. I am coding by hand more than I have in years. And every time that happens, I keep coming back to the same thought: if I have to supervise this much, rewrite this much, and distrust this much, maybe I do not need a coding model the way I thought I did.
Originally posted by u/blackxullul on r/ClaudeCode
