## Introduction

Claude Code has a memory system that automatically accumulates and records user feedback and corrections. Whenever a user points something out or corrects the AI's approach, Claude Code logs it in two places: MEMORY.md, which serves as a global index, and a memory/feedback_xxx.md file that stores the individual feedback.

The structure itself is fairly reasonable. It's a mechanism designed to prevent the same mistakes from recurring and to preserve project context over the long term. But once you actually run a real project for a while, you start running into moments where this "ability to remember" makes the AI dumber rather than smarter. The system itself isn't broken; the problem is that whatever a person says when they've lost their cool gets carved straight into memory.

## The Core Problem: Emotions Get Embalmed in Memory

Anyone who codes gets angry sometimes. When the AI repeats the same mistake for the third time, breaks code that was working perfectly fine, or goes off and does something you never told it to do, getting heated is a natural reaction.

The problem is that the things a user pours out at the AI emotionally over those 10 or 20 minutes get recorded verbatim into memory files:

- Profanity-laced criticism
- Nitpicking that blows whatever just happened out of proportion
- Stopgap rules that only make sense in that one moment
- Inconsistent, off-the-cuff requirements

These pile up neatly into feedback_xxx.md files. And in the next session, Claude Code reads them again and interprets them as "rules the user explicitly instructed."

## The Confusion Caused by Memory Conflicts

The biggest issue is that memories start contradicting each other. Here's an example:

- feedback_034.md (written calmly): "It's fine if functions get long; let me follow the flow within a single file."
- feedback_058.md (written angrily): "Why is this function so long? Split it up right now. No function over 20 lines."
- feedback_072.md (written tired): "Stop chopping things up pointlessly. Merge them."

An AI that walks into the next task carrying all three of these memories at once can't decide which guideline to follow. All three are "things the user said directly," and all three are categorized as "rules to obey." In the end, the AI either produces a half-hearted compromise every time, or prioritizes only the most recent memory and tears down principles you'd previously agreed on.

From the user's side, it feels like, "I clearly told you to do it this way before, so why are you criticizing me for the opposite now?" But from the AI's side, every memory is loaded with equal weight.

## It Doesn't Get Smarter as Memory Accumulates: It Gets More Disoriented

Contrary to common intuition, Claude Code's memory does not improve performance as it accumulates. Instead, the following pattern emerges:

- **The signal-to-noise ratio drops.** Genuinely important architectural decisions and one-off complaints from a bad mood end up in the same directory, in the same format.
- **It eats the context window.** Once memory entries pass 100, a massive chunk of text gets injected into context at the start of every session, and the room left for actually reading and reasoning about code shrinks. (A rough way to measure this is sketched below.)
- **Reasoning power gets wasted resolving contradictions.** The AI tightropes between conflicting guidelines instead of focusing on the actual problem to solve.
- **It even learns the user's emotional patterns.** If it mistakes an angry user's tone for "this user's default register," it starts producing overly defensive or timid responses.

The longer the project runs, the thicker the memory folder gets, and the AI gets dumber in proportion to that thickness.
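To put a rough number on the context-window point, here is a minimal audit sketch you can run at the project root. It assumes the layout described above (a memory/ folder of feedback_*.md files plus a top-level MEMORY.md index), and the 4-characters-per-token figure is only a common rule of thumb, not Claude's actual tokenizer:

```python
#!/usr/bin/env python3
"""Rough audit of how much memory text gets injected per session.

Assumptions: feedback files live in ./memory/ as feedback_*.md, the
global index is ./MEMORY.md, and ~4 characters approximate one token.
"""
from pathlib import Path

MEMORY_DIR = Path("memory")      # assumed location of feedback entries
INDEX_FILE = Path("MEMORY.md")   # assumed global index
CHARS_PER_TOKEN = 4              # crude heuristic, not a real tokenizer

def main() -> None:
    entries = sorted(MEMORY_DIR.glob("feedback_*.md"))
    total_chars = sum(len(p.read_text(encoding="utf-8")) for p in entries)
    if INDEX_FILE.exists():
        total_chars += len(INDEX_FILE.read_text(encoding="utf-8"))

    print(f"feedback entries:    {len(entries)}")
    print(f"total characters:    {total_chars:,}")
    print(f"est. tokens/session: ~{total_chars // CHARS_PER_TOKEN:,}")

if __name__ == "__main__":
    main()
```

If that estimate is a meaningful fraction of your model's context window, the memory folder is eating room that could have gone to actual code.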
## The Solution: Make It Ask the User Once Before Writing Memory

The simplest yet most effective fix is to make Claude Code get user confirmation before writing a feedback memory. Concretely, you add a rule like this to your global CLAUDE.md (or the equivalent global instruction file):

> When creating a new feedback memory (memory/feedback_xxx.md), do not write it on your own initiative. Show the user a draft first, and only save it once approved.

(A fuller sketch of this rule appears at the end of the post.) The difference this single line makes is surprisingly large:

- **It acts as an emotional filter.** Once the user has cooled off and looks at the draft, they can self-filter: "Hm, this isn't really worth carving into memory."
- **The contents of memory maintain a consistent tone.** Spur-of-the-moment cursing doesn't get mixed in with deep architectural decisions.
- **It forces the user's own metacognition.** Every time, you have to pass through the question: "Is this something worth keeping as a permanent rule?"
- **Conflicts decrease.** Because the user reviews directly, memories that contradict existing rules get blocked up front.

## Closing

Granting an AI memory is a powerful feature, but the gatekeeper of that memory must, in the end, be a human. Claude Code's memory system trusts every utterance from the user with equal weight, so the moment the user can't trust themselves (when they're angry, tired, or inconsistent), it shoulders that weight as-is.

The solution isn't to make the system more complex. Just have it ask the user once, right before letters get carved into memory: "Should I save this to permanent memory?" That single line is what turns an AI that gets dumber as the project drags on into one that gets sharper as it does.
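As promised above, here is one way the confirmation rule might be spelled out in a global CLAUDE.md. The section name and exact wording are a sketch, not anything Claude Code requires; adapt them to your own conventions:

```markdown
## Memory hygiene

- Never create or modify memory/feedback_xxx.md files on your own initiative.
- When feedback seems worth remembering, show me a short draft first and ask:
  "Should I save this to permanent memory?"
- Save the entry only after I explicitly approve the draft.
- If a draft conflicts with an existing memory entry, point out the conflict
  instead of silently adding a new rule.
```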
Originally posted by u/Diligent-Meat-1677 on r/ClaudeCode
