Original Reddit post

Free tool: https://grape-root.vercel.app/ | Discord: https://discord.gg/rxgVVgCh (for debugging/feedback)

I've been building a free tool called GrapeRoot, a dual-graph context system that sits on top of Claude Code (and was itself built with Claude Code). I just ran a benchmark on the latest version, and the results honestly surprised me.

**Setup**

- Test project: a restaurant CRM with 278 files, 16 SQLAlchemy models, and 3 frontends
- 10 complex prompts (security audits, debugging, migration design, performance optimization, dependency mapping)
- Model: Claude Sonnet 4.6
- Both modes had all Claude tools (Read, Grep, Glob, Bash, Agent). GrapeRoot had the same tools plus pre-packed repo context (function signatures and call graphs).

**Results**

- 45% cheaper
- 13% better quality
- 10/10 prompts won

Some highlights:

- Performance optimization: 80% cheaper, 20 turns → 1 turn, quality 89 → 94
- Migration design: 81% cheaper, 12 turns → 1 turn
- Testing strategy: 76% cheaper, quality 28 → 91
- Full-stack debugging: 73% cheaper, 17 turns → 1 turn

Most of the savings came from eliminating exploration loops. Normally Claude spends many turns reading files, grepping, and reconstructing repo context. GrapeRoot instead pre-scans the repo, builds a graph of files, symbols, and dependencies, and injects the relevant context before Claude starts reasoning. So Claude starts solving the problem immediately instead of spending 10+ turns exploring.

**Quality scoring**

Responses were scored 0–100 based on:

- problem solved (30)
- completeness (20)
- actionable fixes/code (20)
- specificity to files/functions (15)
- depth of analysis (15)

Curious whether other Claude Code users see the same issue: does repo exploration burn most of your tokens too?

Originally posted by u/intellinker on r/ArtificialInteligence