Free Tool: https://grape-root.vercel.app/

Discord (recommended for setup help / bugs / updates on new tools): https://discord.gg/rxgVVgCh

Story: I’ve been experimenting a lot with Claude Code CLI recently and kept running into session limits faster than expected. After tracking token usage, I noticed something interesting: a lot of tokens were being burned not on reasoning, but on re-exploring the same repository context over and over during follow-up prompts.

So I started building a small tool (with Claude Code itself) that tries to reduce redundant repo exploration by keeping a lightweight memory of which files were already explored during the session. Instead of rediscovering the same files again and again, it helps the agent route directly to the relevant parts of the repo and skip re-reading files that haven’t changed.

What it currently tries to do:

- track which files were already explored
- avoid re-reading unchanged files repeatedly
- keep relevant files “warm” across turns
- reduce repeated context reconstruction

So far 100+ people have tried it, and several reported noticeably longer Claude sessions before hitting usage limits.

One surprising thing during testing: even single prompts sometimes trigger multiple internal file reads while the agent explores the repo. Reducing those redundant reads ended up saving tokens earlier than I expected.

Still very much experimental, so I’m mainly sharing it to get feedback from people using Claude Code heavily. Curious if others have noticed something similar: does token usage spike more from reasoning, or from repo exploration loops?

Would love feedback.

submitted by /u/intellinker
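For anyone curious about the general idea, the "skip re-reads of unchanged files" part can be sketched roughly like this. This is my own minimal illustration, not the tool's actual implementation; the class and method names are hypothetical. It keys each file on a content hash, so a re-read is only signalled when the file is new or has actually changed:

```python
import hashlib
from pathlib import Path


class ExploredFileCache:
    """Remembers which files were already read this session and
    skips re-reads when the content hash is unchanged."""

    def __init__(self):
        # path -> sha256 of the content as of the last read
        self._seen: dict[str, str] = {}

    def _digest(self, path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def needs_read(self, path: str) -> bool:
        """True if the file is new or changed since the last read."""
        digest = self._digest(Path(path))
        if self._seen.get(path) == digest:
            return False  # unchanged: skip the redundant read
        self._seen[path] = digest
        return True
```

The first `needs_read()` call for a file returns True; repeat calls return False until the file's contents change, at which point it returns True again. A real version would also need to handle deleted files and decide when to evict "cold" entries.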
Originally posted by u/intellinker on r/ArtificialInteligence
