After a few weeks of longer agentic runs going sideways, I traced most failures to the same root cause: too much ambient context flooding the window before any real work started. My CLAUDE.md had grown into a wall of project history, decisions, and style notes. Claude would read all of it, then start pulling in large files to orient itself, and by the time it hit the actual task it was already fighting context pressure. The fix was splitting CLAUDE.md into a lean top-level file with only what affects every session, plus task-specific instruction files I reference explicitly in my prompts.

Task scoping turned out to matter as much as the config. I used to hand Claude a vague directive like "refactor the auth module" and let it explore. Now I write prompts that specify the entry file, the boundary of what should change, and what a passing test run looks like. That structure alone cut my mid-run derailments significantly. Claude spends less time figuring out what counts as done.

For parallel work, spawning subagents on genuinely independent tasks is worth the overhead once you have scoping figured out. Where I ran into trouble early was assigning subagents tasks that touched shared files. They'd stomp on each other's changes. Now the rule is: one subagent, one file boundary. Boring constraint, but it holds.

The thing I wish I'd tracked earlier is what actually pushes context usage up in a single session. Long test output piped directly back into the window is a quiet killer. I now redirect verbose test output to a file and tell Claude to read the summary lines only. Small change, noticeably longer useful runs before things get fuzzy near the window ceiling.
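To make the scoping point concrete, here's the rough prompt shape I mean: entry file, change boundary, and a done condition. Every file name and the test command below are hypothetical, not from a real project:

```markdown
Refactor the session-refresh logic in auth/session.ts.

- Entry point: `refreshSession()` in auth/session.ts
- In scope: auth/session.ts and its unit tests only
- Out of scope: anything under api/ and the login UI
- Done when: `npm test -- session` passes with no new failures
```

The "done when" line does the most work in my experience; without it, Claude decides for itself when to stop exploring.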
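A minimal sketch of the test-output trick, with a simulated run standing in for a real test command (the runner output, log path, and summary format here are all stand-ins, not any real tool's output):

```shell
# Simulate a verbose test run; in practice this would be your actual
# test command with stdout/stderr redirected to a file, e.g.
# `your-test-runner > test-output.log 2>&1`
printf '%s\n' \
  'test_login ... ok' \
  'test_refresh ... ok' \
  '...hundreds of lines of tracing...' \
  '== 2 passed, 0 failed ==' > test-output.log

# Instead of piping the whole log back into the context window,
# surface only the summary line for Claude to read
grep -E 'passed|failed' test-output.log
```

The prompt then says something like "run the suite redirected to test-output.log and read only the grep output," so the verbose middle never enters the window.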
Originally posted by u/ShellSageAI on r/ClaudeCode
