We’ve been working on Signet, an open-source memory system for AI coding agents (Claude Code, OpenCode, OpenClaw). It just hit 80% F1 on the LoCoMo benchmark — the long-term conversational memory eval from Snap Research. For reference, standard RAG scores around 41 and GPT-4 with full context scores 32. The human ceiling is 87.9.

The core idea is that the agent should never manage its own memory. Most approaches give the agent a “remember” tool and hope it uses it well. Signet flips that:
- Memories are extracted after each session by a separate LLM pipeline — no tool calls during the conversation
- Relevant context is injected before each prompt — the agent doesn’t search for what it needs; it just has it

Think of it like human memory. You don’t query a database to remember someone’s name — it surfaces on its own.

Everything runs locally: SQLite on your machine, no cloud dependency, works offline. The same agent memory persists across different coding tools. One install command and you’re running in a few minutes. Apache 2.0 licensed.

What we’re working on next: a per-user predictive memory model that learns your patterns and anticipates what context you’ll need before you ask. Trained locally, weights stay on your machine.

Repo is in the comments. Happy to answer questions or talk about the architecture.
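For anyone curious how the two phases fit together, here is a minimal sketch of the extract-then-inject loop over local SQLite. The function names and schema (`store_memories`, `inject_context`, a `memories` table) are illustrative assumptions, not Signet’s actual API:

```python
import sqlite3

# Local store, no cloud dependency (Signet uses SQLite on disk;
# ":memory:" here just keeps the sketch self-contained).
DB = sqlite3.connect(":memory:")
DB.execute("CREATE TABLE memories (topic TEXT, fact TEXT)")

def store_memories(extracted):
    """Phase 1 (post-session): a separate extraction pipeline
    writes facts to the store — no tool calls mid-conversation."""
    DB.executemany("INSERT INTO memories VALUES (?, ?)", extracted)
    DB.commit()

def inject_context(prompt, topic):
    """Phase 2 (pre-prompt): relevant memories are prepended
    automatically — the agent never searches for them."""
    rows = DB.execute(
        "SELECT fact FROM memories WHERE topic = ?", (topic,)
    ).fetchall()
    if not rows:
        return prompt
    context = "\n".join(f"- {fact}" for (fact,) in rows)
    return f"Relevant memory:\n{context}\n\n{prompt}"

# After one session, a fact is stored; the next prompt gets it for free.
store_memories([("build", "Project uses pnpm, not npm")])
print(inject_context("Add a test script", "build"))
```

The point of the split is that memory quality no longer depends on the agent remembering to call a tool; extraction and retrieval both happen outside the conversation loop.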
Originally posted by u/loolemon on r/ClaudeCode
