Original Reddit post

Given enough sessions, old decisions surface as new problems. An architecture choice you settled weeks ago gets re-evaluated. A framework you committed to gets second-guessed. Beyond the wasted tokens, this is structural damage: Claude is operating against the version of the project that lives in its context rather than the one you've worked out. Your documentation knows what's settled, but Claude can't surface that information.

I'd been building this in Obsidian and Claude Code since January, with a PARA structure and hard-coded directives in CLAUDE.md telling Claude which folder meant what. It mostly worked. The issue was settled decisions getting re-derived, adopted frameworks staying tied to the context window, and problems of that kind. This isn't a memory problem: Smart Connections provides semantic search, and on a relatively small vault retrieval isn't the bottleneck. The issue was really two problems folded into one. The system didn't accurately reflect the state of the user, and the state it did capture wasn't loaded at the right time.

Loaded at startup

What I've come to is a system that (1) holds the information the user needs at runtime, and (2) loads that information at startup so it persists throughout the chat. What I want loaded: future tasks so the AI knows what to work on, insights so it works the way I think, and settled decisions so work doesn't have to be redone. This is a past, present, and future system. Together these capture the state of mind of the user. Let's look at an example of each.

"Marshal pattern is an architectural primitive, not a coordination convenience" is an entry in my decision log. It captures a design rule: one skill calls multiple sub-skills rather than bloating any single skill's actions. If I ever work on skills again, this gets recalled and the AI recommends the marshal pattern. It needs to be loaded into every session. It is.

Related to this is the insight that caused me to research that decision. In my Field Notes (insight log) there is the entry "conventions cannot force their own exercise; behavioral rules fail under task pressure." Loading it puts each session in the mindset of writing scripts to enforce behavior wherever possible. Without it loaded, I would have to re-explain or re-derive this insight every session.

Finally there are Roadmaps, or task lists, which hold what needs to be done, in what order, what blocks each item, which project it belongs to, and so on. I want this because it tracks progress for each project, and I don't want to explain every tiny detail every session.

Startup and close cycle

The second half of the problem was getting this information into session context. I solve this by running a /startup skill at the beginning of each Claude Code instance. The startup skill loads these three key files so that responses stay anchored to the established context. During each session I work with the AI, and at the end I need to push information back into the vault. That is done by a /close skill, which analyzes the chat transcript and pushes insights, task updates, and decisions back into the system. These commands (alongside others not mentioned) form the lifecycle of information flow within the system.
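To make the load step concrete, here is a minimal sketch of what a /startup-style loader could do. The vault path, file names, and folder placement are assumptions for illustration, not the author's actual layout; in Claude Code the equivalent would live in a skill definition, and this script only makes the data flow visible.

```python
#!/usr/bin/env python3
"""Minimal sketch of a /startup-style loader (hypothetical paths)."""
from pathlib import Path

VAULT = Path("~/vault").expanduser()  # hypothetical vault root

# The "past, present, and future" files the post describes.
KEY_FILES = [
    VAULT / "04_Knowledge" / "Decision Log.md",  # settled decisions (past)
    VAULT / "04_Knowledge" / "Field Notes.md",   # insights (present)
    VAULT / "02_Projects" / "Roadmap.md",        # tasks and blockers (future)
]

def build_startup_context() -> str:
    """Concatenate the key files so every session starts anchored to them."""
    sections = []
    for path in KEY_FILES:
        if path.exists():
            sections.append(f"## {path.stem}\n\n{path.read_text()}")
        else:
            sections.append(f"## {path.stem}\n\n(missing: {path})")
    return "\n\n".join(sections)

if __name__ == "__main__":
    print(build_startup_context())
```

The /close half would run in the other direction, appending new decisions, insights, and task updates to the same three files at the end of a session.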
Folders as a trust hierarchy

This system, alongside a modified PARA structure, lets Claude know what information to access, when to access it, and what the information means. The folder structure follows a modified PARA:

00_System - system files
01_Inbox - unsorted work
02_Projects - work with defined end states
03_Areas - work without defined end states
04_Knowledge - cross-cutting knowledge I generate
05_Reference - externally authored documents
90_Archive - completed or inactive

This folder structure works with the information processing to tell Claude what's in a file. When I hit contradictions, I check the folder a document is in to weigh how much to trust it. A quick note in the inbox counts for less than an externally authored document. 90_Archive has a different naming standard and is trusted lower than even 00_System.

The human gate

Let's look at why reflecting the human's state matters. The black-pill story that goes around is that AI is here to replace humans. I don't think this is true. The role of the human at the plateau of LLMs will be vision: what products get built, and what constraints get imposed when judgement is actualized. Smarter AI makes this system better and makes the human more important, not less. This means the human is the final gate that information must pass through to be accepted into the system, which is what /close does. It pulls the user's thinking from the chat session into their files. The code might not be written by a human, but what they want the code to do and the principles they want maintained are all stored within the system. Every decision log entry and every field note stores this human state. The human gate should be baked into the systems of a PKM. Without it, the PKM becomes a managed memory system for agents.

Promotion and pruning

One challenge of AI PKMs is removing information. Field Notes has a process for surfacing it. When the close skill catches an insight, it lands in an "Emerging Patterns" section, and if the same pattern emerges across multiple sessions it climbs up the tiers until it eventually becomes an "Active Principle." Each tier is weighted more heavily, but lower tiers are still consulted when necessary. Active principles can also be demoted to "emerging" if contradictions surface, and they are periodically reviewed.

The Decision Log stores decisions, but suppose the system caught and stored a super-niche, non-relevant decision like "on home page, spacing of image makes us want header a little bit off center." Twelve months down the line that isn't relevant at all. In fact, storing this kind of information degrades the system's usefulness, because it could be surfaced as relevant at a time it isn't helpful.

Both examples show that the system collects information nonstop. To be self-sustaining, it must promote and prune. This flows through the human, who has executive power over what is done. It happens in a Weekly Review skill, another key piece of the /startup and /close cycle. This skill audits work-item status, surfaces decisions that have aged out of relevance, runs the promotion and demotion cycles on Field Notes, and refreshes what's loaded at startup.
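As a sketch of how that promotion and demotion cycle could be represented, here is a small Python model. The tier names "Emerging Pattern" and "Active Principle" come from the post; the intermediate tier, the promotion threshold, and the data model are my assumptions, not the author's actual Weekly Review skill.

```python
"""Sketch of a Field Notes promotion/demotion pass (assumed thresholds)."""
from dataclasses import dataclass

TIERS = ["Emerging Pattern", "Recurring Pattern", "Active Principle"]  # middle tier assumed
PROMOTE_AFTER = 3  # sessions an insight must recur before climbing a tier (assumption)

@dataclass
class Insight:
    text: str
    tier: int = 0          # index into TIERS
    sightings: int = 1     # sessions in which /close re-surfaced it
    contradicted: bool = False

def weekly_review(insights: list[Insight]) -> None:
    """Run one promotion/demotion pass, the way a Weekly Review skill might."""
    for note in insights:
        if note.contradicted and note.tier > 0:
            note.tier -= 1      # demote principles that hit contradictions
            note.contradicted = False
        elif note.sightings >= PROMOTE_AFTER and note.tier < len(TIERS) - 1:
            note.tier += 1      # promote patterns that keep recurring
            note.sightings = 0  # reset the count for the next tier

# Example: an insight seen in three sessions climbs one tier.
notes = [Insight("conventions cannot force their own exercise", sightings=3)]
weekly_review(notes)
print(TIERS[notes[0].tier])  # -> "Recurring Pattern"
```

Pruning works the same way in reverse: entries that stop being cited, or that age out of relevance, get flagged for the human to archive or delete during the review.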
The goal of this system, said plainly, is to mesh the working state of the human and the AI, optimizing toward a zero-friction environment. This is how I define congruence: the system's working state, kept current with mine.

Disclosure: I built this and sell the system as an info offer. Not pitching, just here to discuss the ideas.
Originally posted by u/GraeDaBoss on r/ClaudeCode