Original Reddit post

I added a 2-line context file to Claude’s system prompt. Just the language and test framework, nothing else. It performed the same as a 2,000-token CLAUDE.md I’d spent months building. I almost didn’t run that control.

Let me back up. I’d been logging what Claude Code actually does, turn by turn: 170 sessions, about 7,600 turns. 59% of turns were reading files it never ends up editing, 13% were rerunning tests without changing any code, and 28% were actual work.

I built 15 enrichments to fix this (architecture docs, key files, coupling maps) and tested them across 700+ sessions. None held up. Three that individually showed -26%, -16%, and -32% improvements combined to +63% overhead. I still think about that one.

The thing that actually predicts session length is when Claude makes its first edit. Each turn before that adds ~1.3 turns to the whole session. Claude finds the right files eventually; it just doesn’t trust itself to start editing.

So I built a tool that tells it where to start. It parses your dependency graph, predicts which files need editing, and fires as a hook on every prompt. If you already mention file paths, it does nothing.

On a JSX bug in Hono: without it, Claude wandered for 14 minutes and gave up. With it, a 2-minute fix. Across 5 OSS bugs (small n, not a proper benchmark): baseline 3/5, with the tool 5/5.

npx @michaelabrt/clarte

No configuration required.

Small note: I know there’s a new “make Claude better” tool every day, so I wouldn’t blame you for ignoring this. But it would genuinely help if you could give it a try.

Full research (30+ experiments): https://github.com/michaelabrt/clarte/blob/main/docs/research.md
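[Editor's sketch] The hook's decision rule described above can be sketched roughly like this. The function names and the in-degree ranking heuristic are illustrative assumptions, not clarte's actual implementation; only the behavior (skip when the prompt already names file paths, otherwise suggest starting files from the dependency graph) comes from the post.

```javascript
// Illustrative sketch only: names and heuristic are hypothetical, not clarte's real API.

// Detect whether the prompt already mentions a file path (e.g. "src/app.js").
function mentionsFilePath(prompt) {
  return /\b[\w./-]+\.(jsx?|tsx?|py|go|rs|java)\b/.test(prompt);
}

// Toy ranking: treat files that many other files depend on as likely starting points.
// depGraph maps a file to the list of files it imports.
function suggestStartingFiles(depGraph, limit = 3) {
  const inDegree = new Map();
  for (const [file, deps] of Object.entries(depGraph)) {
    if (!inDegree.has(file)) inDegree.set(file, 0);
    for (const dep of deps) inDegree.set(dep, (inDegree.get(dep) ?? 0) + 1);
  }
  return [...inDegree.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([file]) => file);
}

// The hook: no-op if the user already pointed at files, otherwise suggest entry points.
function hook(prompt, depGraph) {
  if (mentionsFilePath(prompt)) return null;
  return `Consider starting in: ${suggestStartingFiles(depGraph).join(", ")}`;
}
```

A real implementation would build the dependency graph from the repo and feed the suggestion back into the prompt; the point here is just the "do nothing when paths are already mentioned" guard.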

Originally posted by u/-Psychologist- on r/ClaudeCode