Free tool: https://grape-root.vercel.app/

Discord (bugs/feedback): https://discord.gg/rxgVVgCh

Story starts here :)

I’ve been experimenting with an MCP tool that extends Claude Code usage by optimizing how context is fed to the model. Instead of dumping full repo context every time, it uses a dual-graph structure plus file-state hashing to surface only the relevant parts of the codebase. The goal is simple: reduce wasted exploration tokens.

What I’m seeing so far:

• Bug fixing: ~71% fewer tokens used

• Refactoring: ~53% fewer tokens used

For broader tasks like architecture explanations or open-ended debugging, the savings aren’t always there, because those naturally require wider context. But when used for targeted tasks (bug fixes, refactors, focused edits), it noticeably extends how far your Claude Code budget goes.

120+ people have saved about $60 in usage each. Instead of upgrading to the $100 Claude plan, they ran two $20 plans and still had room because token usage dropped so much.

The tool is called GrapeRoot; it’s basically an MCP-based context optimization layer for AI coding workflows.

Curious whether others building MCP tools or context orchestration layers are seeing similar patterns when optimizing LLM coding workflows.
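The file-state-hashing idea described above can be sketched roughly like this: hash each file's contents, remember the hashes, and only surface files whose hash has changed since the last turn. This is a minimal illustration of the general technique, not GrapeRoot's actual implementation; the `ContextCache` class and method names here are hypothetical.

```python
import hashlib
from pathlib import Path


def file_hash(path: Path) -> str:
    # Content hash lets us detect changes without re-reading context into the model.
    return hashlib.sha256(path.read_bytes()).hexdigest()


class ContextCache:
    """Tracks per-file content hashes and surfaces only files that changed."""

    def __init__(self) -> None:
        self._seen: dict[str, str] = {}

    def changed_files(self, paths) -> list[Path]:
        changed = []
        for p in map(Path, paths):
            h = file_hash(p)
            if self._seen.get(str(p)) != h:
                # New or modified since last check: include it in the context.
                self._seen[str(p)] = h
                changed.append(p)
        return changed
```

On the first call every file is "changed" and would be sent as context; on subsequent calls only edited files come back, which is where the token savings on repeated, targeted tasks would come from.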
Originally posted by u/intellinker on r/ArtificialInteligence
