Original Reddit post

About 7 months ago I was trying to work through a large set of messy documents (Epstein/EFTA files) and kept running into the same problem: AI is great at answering questions, but terrible at remembering what you’ve already done. Every session felt like starting over. I’d read something, come back later with new info, and spend half my time reconstructing context instead of actually making progress. Worse, I’d miss contradictions or patterns just because everything was scattered across time. So I started building something for myself. The goal wasn’t “better answers,” it was:

  • don’t lose context
  • track patterns across sessions
  • flag inconsistencies automatically
  • and reduce how much I have to re-explain things

Basically, something that sits between me and the model and keeps things coherent over time. The interesting part is where it ended up: the biggest improvement didn’t come from the model; it came from:
  • structuring inputs before they hit the model
  • keeping persistent memory
  • and adding a layer that questions outputs instead of just returning them

After ~7 months, I still wouldn’t say I’ve “solved” the original problem; I haven’t even thought about getting back into the files. But I did end up with something that feels way closer to actual thinking over time instead of stateless prompting (rough sketch of the pattern below). Curious if anyone else has gone down this path, trying to make AI less like a session-based tool and more like a persistent system? What was your motivation?
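For anyone curious, here’s roughly the shape of that in-between layer. This isn’t my actual code; the file name, memory format, and prompts are illustrative, and the model call is passed in as a plain function so it doesn’t assume any particular provider:

```python
# Minimal sketch of the layer that sits between me and the model.
# Everything here is illustrative; call_model is any function that
# takes a prompt string and returns the model's text response.
import json
from pathlib import Path
from typing import Callable

MEMORY_FILE = Path("memory.json")  # hypothetical persistent store


def load_memory() -> list[str]:
    """Load notes carried over from previous sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(notes: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(notes, indent=2))


def ask(question: str, call_model: Callable[[str], str]) -> str:
    notes = load_memory()

    # 1. Structure the input before it hits the model: prepend persistent notes.
    prompt = (
        "Known notes from earlier sessions:\n"
        + "\n".join(f"- {n}" for n in notes)
        + f"\n\nQuestion: {question}"
    )
    answer = call_model(prompt)

    # 2. Question the output instead of just returning it: ask the model
    #    to check the answer against the stored notes for contradictions.
    critique = call_model(
        "Does this answer contradict any of the notes below? "
        "Reply with the contradictions, or 'none'.\n\n"
        f"Answer: {answer}\n\nNotes:\n" + "\n".join(notes)
    )

    # 3. Persist what we learned so the next session doesn't start from zero.
    notes.append(f"Q: {question} -> A: {answer}")
    save_memory(notes)

    if critique.strip().lower() == "none":
        return answer
    return f"{answer}\n\n[flagged: {critique}]"
```

The point is the loop, not the specifics: structure the input from stored notes, check the output against those notes, and write back what was learned so the next session starts warm.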

Originally posted by u/axendo on r/ArtificialInteligence