So, I have a project of around 300k LoC that I've been working on with Claude Code since the beginning. As the project grew, I made sure to set up both rules AND documentation (organized by topic/module) that summarizes where things are and what they do, so Claude doesn't light tokens on fire and doesn't fill its context with garbage before getting to the stuff it actually needs to pay attention to. That system was working flawlessly… until last week.

I know Anthropic has been messing with the limits ahead of the changes that went live today, but I'm wondering if they also did something to the reasoning in the responses. I've seen a MASSIVE increase in two things in particular:

1. The "I know the solution, but wait, what about… BUT WHAT IF… BUT BUT BUT WHAT ABOUT THAT OTHER THING" loops; and
2. Ignoring CLAUDE.md and skills, even for the smallest of things.

Yeah, I know, these models are all prone to doing that, except it wasn't happening nearly this frequently before. The only time I used to see it was in large context windows where the agent actually had to read a bunch (which, again, I have many safeguards to avoid), and even then it was a rarity. Now I'll start a new conversation, ask it to change something minor, and it frequently does things wrong or gets stuck in those loops.

Has anyone seen a similar increase in these scenarios? Because this shit is gonna make the new limits even fucking worse if prompts that previously would have been fine now require additional work and usage…
Originally posted by u/DanteStrauss on r/ClaudeCode
