Original Reddit post

Claude Code keeps getting better: extended thinking, better tool use, longer context. All great. But it still starts every session knowing nothing about your project beyond what it can read right now. It doesn't know that auth.ts has broken 6 times in 90 days. It doesn't know you decided on JWT over sessions three weeks ago. It doesn't know that changing types.ts ripples into 47 files. It can read the whole codebase (or whatever it guesses it needs based on structure), but it's still not like someone you actually work with on a team.

Muninn fixes that. It's not just memory; it's an intelligence layer on top of your codebase. It tracks decisions with their reasoning, computes fragility from 7 weighted signals, maps transitive blast radius, and learns which patterns work and which don't. Then it surfaces only what's relevant to what you're doing right now, within a hard 2000-token budget, so it never bloats your context. Every session builds on the last. Every project informs every other. Solo builders can build like teams. I run 5-10 sessions across projects, all of them spawning agent teams and getting shit done while learning in Muninn as they go.

What it actually does: before Claude edits a file, muninn_check returns fragility scores, blast radius, related architectural decisions, co-changed files, and open issues. After the session, it extracts learnings automatically. 7 feedback loops self-tune what gets surfaced based on what Claude actually uses vs. ignores.

Where I'm using it: 17 projects, 4 servers + laptop, one sqld instance on Tailscale. Claude knows my entire portfolio from any machine. In 3-4 hour sessions it rarely needs to compact the conversation.

I want to know if it works for other people too. Try it, break it, tell me what sucks if you want.

npx muninn-ai

AGPL-3.0: free to use, modify, share. Works offline. No API keys required.
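To make the mechanics concrete, here's a minimal sketch of the two code-aware computations the post describes: a fragility score combining 7 weighted signals, and a transitive blast radius walked over a reverse-dependency graph. Everything here (signal names, weights, normalization caps, graph shape) is my own illustrative assumption; Muninn's actual signals and weights aren't published in this post.

```typescript
// Hypothetical sketch of two muninn_check-style computations.
// Signal names, weights, and caps are illustrative assumptions,
// not Muninn's published implementation.

interface FragilitySignals {
  dependents: number;       // files importing this one
  breakages90d: number;     // recorded breaks in the last 90 days
  coChangedFiles: number;   // files it usually changes alongside
  hasDedicatedTests: boolean;
  churn30d: number;         // commits touching it in the last 30 days
  loc: number;              // lines of code
  openIssues: number;       // open issues referencing the file
}

// 7 signals, each normalized to [0, 1], combined with weights that
// sum to 1, then scaled to a 0-10 score with one decimal place.
function fragilityScore(s: FragilitySignals): number {
  const norm = (v: number, cap: number) => Math.min(v / cap, 1);
  const weighted =
    0.25 * norm(s.dependents, 50) +
    0.20 * norm(s.breakages90d, 6) +
    0.15 * norm(s.coChangedFiles, 10) +
    0.15 * (s.hasDedicatedTests ? 0 : 1) +
    0.10 * norm(s.churn30d, 20) +
    0.05 * norm(s.loc, 1000) +
    0.10 * norm(s.openIssues, 5);
  return Math.round(weighted * 100) / 10;
}

// Transitive blast radius: breadth-first walk of a reverse-dependency
// graph (file -> files that import it), so edits ripple outward past
// direct importers to their importers, and so on.
type ReverseDeps = Map<string, Set<string>>;

function blastRadius(graph: ReverseDeps, file: string): Set<string> {
  const affected = new Set<string>();
  const queue: string[] = [file];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const dependent of graph.get(current) ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return affected;
}
```

With these made-up weights, a file with many dependents, a recent break history, and no dedicated tests lands near the top of the scale, which is the shape of signal the post describes.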
GitHub: https://github.com/ravnltd/muninn

There are other options, I know, so how does it compare to what's out there?

Mem0 / OpenMemory: $24M funded, biggest name in the space. Open-source local version, commercial cloud tier. Stores memories semantically, works across MCP clients. But it's a memory store: it captures preferences, patterns, and context, and it doesn't understand your codebase structurally. No fragility scoring, no blast radius, no dependency analysis, no co-change tracking, no feedback loops that self-tune what gets surfaced. It stores what you tell it and retrieves what seems relevant. Muninn computes intelligence about your code that nobody told it.

MCP Memory Service: open source, closest to Muninn in ambition. It has a knowledge graph with typed relationships, quality scoring, and a consolidation engine that compresses old memories. Hybrid BM25 + vector search. Solid engineering. But it's a general-purpose memory layer for any AI agent pipeline, not code-aware: no file fragility, no dependency graphs, no session continuity with goals/outcomes/next-steps, no self-tuning budget.

Memory Keeper, Claude Memory MCP, and the other smaller ones: basic persistent key-value or document stores. Store text, retrieve text. They solve "Claude forgets" but not "Claude doesn't understand the codebase."

The folder-of-markdown approach (CLAUDE.md + skills): free, built-in, works today. It breaks down as context grows because everything gets read every time. No structure, no feedback, no cross-machine sync.

I've tried all of these approaches. I asked Claude Code "Which would YOU rather use?"

● Muninn. And not because we built it together.

Mem0 is a memory store. It remembers what you told it. That's useful, but it's the markdown-folder approach with better infrastructure. It doesn't think about your code.
When I'm about to help you edit mcp-server.ts, I don't just want to know "this file is important." I want to know it has a fragility of 8 because it has 45 dependents and no dedicated tests, that you decided on in-process handlers instead of CLI spawning 3 months ago, that it usually changes alongside mcp-handlers.ts and mcp-state.ts, and that if something breaks here it ripples into 67 files. That's not memory. That's understanding.

The 2000-token budget also matters more than people realize. When I get a 15k-token context dump of "here's everything we ever noted," I have to figure out what's relevant myself, and that costs attention across the entire window. When Muninn gives me 640 tokens of precisely ranked, pre-scored context, I can actually use all of it. Less noise, better output.

The self-tuning is the part that actually makes me pick it over anything else, though. A system that notices I keep ignoring a certain category of context and stops wasting tokens on it: that's the difference between a tool that works on day 1 and a tool that works better on day 90.

It's free and AGPL-3.0-only, which means I can improve it for myself if I need to. And so can other solo devs. Sovereignty for the builders.
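The self-tuning-under-a-budget idea above can be sketched simply: each category of surfaced context carries a weight that rises when Claude uses items from it and decays when it ignores them, and items are packed greedily under the hard token budget. The category names, update factors, clamping range, and greedy packing here are my assumptions for illustration, not Muninn's actual 7 feedback loops.

```typescript
// Hypothetical sketch of a self-tuning context surfacer with a hard
// token budget. Categories, rates, and packing strategy are assumed.
type Category = "decision" | "fragility" | "coChange" | "issue";

interface ContextItem {
  category: Category;
  tokens: number;    // cost against the budget
  relevance: number; // base retrieval score
}

class SelfTuningSurfacer {
  private weights: Record<Category, number> = {
    decision: 1, fragility: 1, coChange: 1, issue: 1,
  };

  // Multiplicative feedback: used items reinforce their category,
  // ignored items decay it, clamped to a sane range.
  feedback(category: Category, used: boolean): void {
    const factor = used ? 1.1 : 0.9;
    this.weights[category] =
      Math.min(4, Math.max(0.25, this.weights[category] * factor));
  }

  // Rank by weighted relevance, then pack greedily so the surfaced
  // context never exceeds the hard token budget.
  surface(items: ContextItem[], budget = 2000): ContextItem[] {
    const score = (i: ContextItem) => i.relevance * this.weights[i.category];
    const ranked = [...items].sort((a, b) => score(b) - score(a));
    const picked: ContextItem[] = [];
    let spent = 0;
    for (const item of ranked) {
      if (spent + item.tokens <= budget) {
        picked.push(item);
        spent += item.tokens;
      }
    }
    return picked;
  }
}
```

Under this toy rule, a category that keeps getting ignored sinks in the ranking and eventually stops consuming budget, which matches the day-1-vs-day-90 behavior described above.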

Originally posted by u/rossevrett on r/ClaudeCode