Open-source tool: https://github.com/kunal12203/Codex-CLI-Compact
Installation steps: https://grape-root.vercel.app/
Join the Discord for debugging/feedback.

I stopped paying $100+/month for AI coding tools, not because I stopped using them, but because I realized most of that cost was wasted tokens. Most tools re-read the same files every turn, so you end up paying for the same context again and again.

I've been building GrapeRoot, a free, open-source local MCP server that sits between your codebase and tools like Claude Code, Codex, Cursor, and Gemini. Instead of blindly sending full files, it builds a structured understanding of your repo and keeps track of what the model has already seen during the session.

Results so far:

- 500+ users
- ~200 daily active
- ~4.5/5★ average rating
- 40–80% token reduction depending on workflow
  - Refactoring → biggest savings
  - Greenfield → smaller gains

We did try pushing it toward 80–90% reduction, but quality starts dropping there. The sweet spot we've seen is around 40–60%, where outputs are actually better, not worse.

What this changes:

- Stops repeated context loading
- Sends only the relevant + changed parts of code
- Makes LLM responses more consistent across turns

In practice, this means:

- If you're an early-stage dev → you can get away with almost no cost
- If you're building seriously → you don't need $100–$300/month anymore; a basic subscription plus better context handling is enough

This isn't replacing LLMs. It just stops them from wasting tokens, and quality improves too: you can see benchmarks at https://graperoot.dev/benchmarks.
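To make the "send only changed parts" idea concrete, here's a minimal sketch of session-level tracking: hash each file the model receives, and on later turns send only files whose hash has changed. This is a hypothetical illustration of the general technique, not GrapeRoot's actual implementation (the names `SessionContext` and `files_to_send` are made up for this example).

```python
import hashlib


class SessionContext:
    """Tracks which file versions the model has already seen this session.

    Hypothetical sketch: real context managers also track partial reads,
    edits, and relevance; this only shows the delta idea.
    """

    def __init__(self):
        # path -> content hash the model last received
        self._seen = {}

    def files_to_send(self, files):
        """Return only the files that are new or changed since the last turn."""
        delta = {}
        for path, content in files.items():
            digest = hashlib.sha256(content.encode()).hexdigest()
            if self._seen.get(path) != digest:
                delta[path] = content
                self._seen[path] = digest
        return delta


ctx = SessionContext()
repo = {"app.py": "print('hi')", "util.py": "def f(): pass"}
print(list(ctx.files_to_send(repo)))  # first turn: everything is new
repo["app.py"] = "print('hello')"
print(list(ctx.files_to_send(repo)))  # later turn: only the edited file
```

Every file you *don't* resend is context the provider never bills you for, which is where the bulk of the savings comes from.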
How it works (simplified):

1. Builds a graph of your codebase (files, functions, dependencies)
2. Tracks what the AI has already read/edited
3. Sends the delta plus relevant context instead of everything

Works with:

- Claude Code
- Codex CLI
- Cursor
- Gemini CLI

Other details:

- Runs 100% locally
- No account or API key needed
- No data leaves your machine

If anyone's interested, I'm happy to go deeper into how the graph and session tracking work, or where they break. It's still early and definitely not perfect, but it's already changed how we use AI tools day to day.
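The graph-building step can be sketched in a few lines using Python's standard `ast` module: walk each file's syntax tree and record which functions it defines and which modules it imports. This is only an illustrative toy, assuming the real graph is far richer (call sites, cross-file edges, language support beyond Python); `build_graph` is a hypothetical name for this example.

```python
import ast


def build_graph(files):
    """Build a tiny code graph: per file, the functions it defines
    and the modules it imports.

    Illustrative sketch only; a production code graph would also
    resolve cross-file dependencies and call sites.
    """
    graph = {}
    for path, source in files.items():
        tree = ast.parse(source)
        functions = [n.name for n in ast.walk(tree)
                     if isinstance(n, ast.FunctionDef)]
        imports = [alias.name for n in ast.walk(tree)
                   if isinstance(n, ast.Import) for alias in n.names]
        imports += [n.module for n in ast.walk(tree)
                    if isinstance(n, ast.ImportFrom) and n.module]
        graph[path] = {"functions": functions, "imports": imports}
    return graph


files = {
    "util.py": "def slugify(s):\n    return s.lower()",
    "app.py": "import util\ndef main():\n    print(util.slugify('Hi'))",
}
print(build_graph(files))
```

With a graph like this, a context server can answer "which files are relevant to the function being edited?" instead of shipping the whole repo every turn.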
Originally posted by u/intellinker on r/ArtificialInteligence
