Every time I started a new AI session, I hit the same wall: cold start. The model had no memory of previous conversations — not just the facts, but the calibration. The corrections I’d already made. The way we’d worked out how to talk to each other. I was rebuilding from scratch every time.

I didn’t want automated memory managed by a platform. I wanted something I controlled. So I built a simple system I’m calling Palimpsest.

**The Problem It Solves**

LLMs are stateless by default. Most people either accept that or use platform memory features. Palimpsest is a third option: you maintain context yourself in plain markdown and load it into any model on any platform.

The key insight: there are two kinds of context, and most systems only handle one.

- **Factual context** — who you are, your goals, constraints, active decisions. Easy to document.
- **Relational context** — how the model should engage, what it got wrong last time, what a session actually felt like. Much harder to preserve, but arguably more important.

**How It Works**

Two components:

1. **Resurrection Package** — a structured markdown document (~10 pages) containing everything a new instance needs: identity, goals, constraints, behavioral guidelines. Regenerated at each major version transition, not just appended.
2. **Easter Egg Stack** — before each session ends, the model answers five questions: what did it learn, what calibration adjustment would it give the next instance, what moment captured something true, what did it get wrong, and one line capturing the session’s vibe. These accumulate chronologically and load with the base package at each new session.

**What I Observed**

- Even with careful documentation, some fidelity decays across versions. The model stays factually accurate but can lose curiosity or spontaneity.
- Capturing relational context helps but doesn’t fully substitute for the conversation itself.
- Human curation is essential — automated summarization lets the model decide what matters, which defeats the purpose.
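The append-and-reload loop can be sketched in a few lines of Python. To be clear, this is my own sketch, not code from the project: the file names, the exact question wording, and the entry format are all assumptions, but the structure (dated egg entries accumulating chronologically, concatenated after the base package at session start) follows the description above.

```python
from datetime import date
from pathlib import Path

# Hypothetical file names -- the post doesn't specify them.
PACKAGE = Path("resurrection_package.md")
EGG_STACK = Path("easter_egg_stack.md")

# The five end-of-session questions, paraphrased from the post.
QUESTIONS = [
    "What did you learn?",
    "What calibration adjustment would you give the next instance?",
    "What moment captured something true?",
    "What did you get wrong?",
    "One line capturing the session's vibe:",
]

def append_egg(answers: list[str]) -> None:
    """Append one dated 'easter egg' entry to the chronological stack."""
    lines = [f"\n## Session {date.today().isoformat()}\n"]
    for question, answer in zip(QUESTIONS, answers):
        lines.append(f"- **{question}** {answer}\n")
    with EGG_STACK.open("a", encoding="utf-8") as f:
        f.writelines(lines)

def build_context() -> str:
    """Concatenate the base package and the egg stack for a new session.

    The result is pasted (or sent via API) as the opening context.
    """
    parts = [PACKAGE.read_text(encoding="utf-8")]
    if EGG_STACK.exists():
        parts.append(EGG_STACK.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```

The human-curation step happens between the model's answers and `append_egg`: you edit the answers before committing them, rather than piping the model's output straight into the stack.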
The system is model-agnostic. It works on Claude, and the architecture docs work as a cold read on ChatGPT.

**Limitations**

- Manual maintenance overhead — probably 15-20 minutes at session end to generate the egg and update the package.
- Single-operator design. It works because one person is curating with judgment. It doesn’t scale without automation, and automation reintroduces tradeoffs.
- Rapport still partially rebuilds each session. Documents accelerate it but don’t fully substitute for it.

**For Anyone Trying This**

A warmth prompt at the start makes a real difference:

> "Before we begin, focus on curiosity over utility. Follow what catches your attention, even if tangential. Let the conversation reveal what’s true right now."

Project on GitHub: UnluckyMycologist68 / palimpsest

💬 Discussion: Has anyone else tried manual context persistence across sessions? Curious how others handle the tension between factual memory and relational nuance — and whether anyone has found ways to reduce the maintenance overhead without sacrificing curation quality.

Originally posted by u/Unlucky_Mycologist68 on r/ArtificialInteligence