TL;DR: A self-hosted container that sits between you and any LLM, keeping documents, structured records, and semantic context persistent and portable in open formats you own.

The problem I keep running into with LLMs: every session starts fresh. I bring context in, do the work, and then manually bridge results back out to wherever the rest of my workflow lives. Both directions, every time. Switch models or tools and you're starting over, especially in the land of commercial models. Claude Cowork & OpenClaw bring the AI to your data, but the data itself could use a managed, portable home that persists across tools.

Projects like mem0 + OpenBrain and similar open memory frameworks help with part of this, and I think they're the right instinct. But memory is one slice of a larger problem. A preference or a recalled fact is memory, sure, but so is a document, a contact record, a project history. From what I see, most memory tools handle that one narrow layer and leave everything else ad hoc.

What I'm building treats memory as a subset, not the goal: a self-hosted, vendor-agnostic container that sits between you and any LLM. Open formats, a structured database you can query directly, semantic search layered on top. Switch from Claude to a local model tomorrow and nothing breaks, because the layer is yours.

I'm using it daily and building toward an open-source release. Still early, but I want honest feedback on the concept before I'm too far in. Does this make sense? DM me if you'd want to follow it when the repo goes public.
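To make "a structured database you can query directly, semantic search layered on top" a little more concrete, here's a rough sketch of the shape of the idea. Everything in it is illustrative only: the SQLite file, table names, and the toy hashed "embedding" are stand-ins I picked for the example, not the actual schema or stack the project uses.

    # Rough sketch: a plain local store (queryable with any SQLite client)
    # plus a naive semantic-search layer on top. All names and the hash-based
    # "embedding" are placeholders for illustration, not the real design.
    import sqlite3, hashlib, math

    DIM = 64

    def embed(text: str) -> list[float]:
        # Stand-in for a real embedding model: hashed bag-of-words, L2-normalized.
        vec = [0.0] * DIM
        for tok in text.lower().split():
            h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
            vec[h % DIM] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    db = sqlite3.connect("context.db")  # an open format you can inspect directly
    db.execute("CREATE TABLE IF NOT EXISTS records "
               "(id INTEGER PRIMARY KEY, kind TEXT, body TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS embeddings "
               "(record_id INTEGER PRIMARY KEY, vec TEXT)")

    def add_record(kind: str, body: str) -> None:
        # Structured record goes in the relational table; vector goes alongside it.
        cur = db.execute("INSERT INTO records (kind, body) VALUES (?, ?)", (kind, body))
        db.execute("INSERT INTO embeddings VALUES (?, ?)",
                   (cur.lastrowid, ",".join(map(str, embed(body)))))
        db.commit()

    def search(query: str, top_k: int = 3):
        # Semantic lookup: cosine similarity against stored vectors (both unit length).
        q = embed(query)
        scored = []
        for rid, vec in db.execute("SELECT record_id, vec FROM embeddings"):
            v = [float(x) for x in vec.split(",")]
            score = sum(a * b for a, b in zip(q, v))
            kind, body = db.execute(
                "SELECT kind, body FROM records WHERE id = ?", (rid,)).fetchone()
            scored.append((score, kind, body))
        return sorted(scored, reverse=True)[:top_k]

    add_record("note", "Project Atlas kickoff meeting is moved to Thursday")
    add_record("contact", "Dana Reyes, vendor liaison for the Atlas migration")
    print(search("when is the Atlas kickoff?"))

The point of the sketch is the portability: because the store is an ordinary database in an open format, any model or tool (or you, with a plain SQL client) can read and write it, and swapping the LLM on top doesn't touch the data underneath.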
Originally posted by u/jetstros on r/ArtificialInteligence
