Disclosure: I’m the creator of Trellis. It’s free and open source (AGPL-3.0).

Didn’t really get Karpathy’s LLM Wiki post at first, tbh. Then a friend said “dude, isn’t this just your spec system?” Went back and read it carefully. He’s right.

Since Nov 2025 I’ve been building a workflow system for Claude Code called Trellis, and the architecture is basically identical to what Karpathy just published, except mine manages codebases instead of research notes.

The short version: we kept running into the problem where AI writes great code on day 1 but your project turns into spaghetti by month 3 because it has zero memory of conventions. So we started codifying everything (coding standards, architectural decisions, past mistakes) into markdown spec files. Inject before coding, inject again for review, update specs when you learn something new. Gets better every session.

Then Karpathy drops his LLM Wiki and… it’s the same thing?

- His `raw/` = our codebase.
- His `wiki/` = our `spec/`.
- His Ingest = our `update-spec`.
- His Lint = our `break-loop`.

He writes “compile then query”; we wrote “context is injected, not remembered.” He says “knowledge compounds”; we say “what AI learns in one session persists to future sessions.” Same idea, different words, zero coordination.

I think this pattern keeps getting independently discovered because it’s just… how expertise works? Your brain doesn’t RAG your memories from scratch every time. It compiles experience into intuition. That’s what both systems are doing for AI.

Anyone else converging on something similar in their workflow?
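For anyone curious what the inject/update loop looks like mechanically, here is a minimal sketch. This is not Trellis’s actual code; the function names, the flat `spec/*.md` layout, and the prompt format are all my own assumptions for illustration:

```python
from pathlib import Path


def load_specs(spec_dir: str) -> str:
    """Compile every markdown spec file into one context block."""
    parts = []
    for path in sorted(Path(spec_dir).glob("*.md")):
        parts.append(f"## {path.stem}\n{path.read_text()}")
    return "\n\n".join(parts)


def build_prompt(task: str, spec_dir: str) -> str:
    """Inject the compiled specs ahead of the task: context is
    injected, not remembered by the model."""
    return f"Project conventions:\n{load_specs(spec_dir)}\n\nTask:\n{task}"


def update_spec(spec_dir: str, name: str, lesson: str) -> None:
    """Append a newly learned convention so it persists to future
    sessions (the 'update specs when you learn something' step)."""
    path = Path(spec_dir) / f"{name}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {lesson}\n")
```

The point of the sketch is the direction of data flow: specs are compiled into the prompt every session, and lessons flow back into the specs, so nothing depends on the model carrying memory between sessions.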
Originally posted by u/Zealousideal-Dig7780 on r/ClaudeCode
