Original Reddit post

Disclosure: This is my own open-source project (MIT license).

A lot of models now support huge context windows, even up to 1M tokens. But in long-lived AI coding projects, I don't think the main failure mode is lack of context capacity anymore. It's context accuracy. An agent can read a massive amount of information and still choose the wrong truth:

- an old migration note instead of the active architecture
- current implementation quirks instead of the intended contract
- a historical workaround instead of a system invariant
- local code evidence instead of upstream design authority

That's when things start going wrong:

- the same class of bugs keeps recurring across modules
- bug fixes break downstream consumers because dependencies were never made explicit
- design discussions drift because the agent loses module boundaries
- old docs quietly override current decisions
- every new session needs the same constraints repeated again
- debug loops turn into fix → regress → revert because the root cause was never established first

So I built context-governance for this: https://github.com/dominonotesexpert/context-governance

The point is not to give the model more context. The point is to make sure the context it reads is authoritative, minimal, and precise. What it does:

- defines who owns each artifact
- marks which docs are active vs. historical
- routes tasks through explicit stages
- requires root-cause analysis before bug fixes
- prevents downstream implementation from silently rewriting upstream design

I've been using it in my own production project, and the biggest improvement is not that the model "knows more." It's that debugging converges faster, fixes are less likely to go in circles, design docs stay aligned with top-level system docs, and the working baseline is much less likely to drift over time. In other words, the agent is less likely to act on the wrong document, the wrong boundary, or the wrong assumption.
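To make the ownership/status idea concrete, here is a minimal sketch of how such a governance layer could work. This is a hypothetical illustration, not the actual format or API of the context-governance repo: the manifest paths, field names, and team names are all invented for the example.

```python
# Hypothetical sketch: a manifest that records each doc's owner, status, and
# authority level, so an agent resolves the authoritative sources up front
# instead of reading every doc it can find. (Not the repo's real format.)

MANIFEST = {
    "docs/architecture.md":   {"owner": "platform-team", "status": "active",     "authority": 3},
    "docs/migration-2022.md": {"owner": "platform-team", "status": "historical", "authority": 1},
    "src/billing/NOTES.md":   {"owner": "billing-team",  "status": "active",     "authority": 2},
}

def authoritative_docs(manifest):
    """Return only active docs, highest authority first; historical docs are
    filtered out before the agent ever reads them."""
    active = [(path, meta) for path, meta in manifest.items()
              if meta["status"] == "active"]
    return [path for path, _ in sorted(active, key=lambda kv: -kv[1]["authority"])]

print(authoritative_docs(MANIFEST))
# → ['docs/architecture.md', 'src/billing/NOTES.md']
```

The design point is that the filtering happens outside the model: the prompt only ever contains docs the manifest marks as active, so an old migration note can't quietly outvote the current architecture doc.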
There is a tradeoff: more tokens get spent on governance docs before execution. For me that has been worth it, because the rework saved is far greater than the added prompt cost. I'm not suggesting this for small projects; if the repo is still simple, this is unnecessary overhead. But once a project gets large enough that the real problem is conflicting context rather than missing context, I think governance matters more than raw window size.

Curious how others are handling this. Are you solving long-lived agent drift with bigger context windows alone, or are you doing something explicit to keep context accurate and authoritative?

Originally posted by u/feather812002finland on r/ClaudeCode