Okay, the title is a bit clickbait. I didn't replace Claude Code; I made it smarter about what it reads. Feedback welcome, and see the graph results below for verification :)

Free tool: https://grape-root.vercel.app/

Discord: https://discord.gg/rxgVVgCh (for debugging/feedback)

I was burning tokens watching Claude Code re-read entire files when it only needed one function. So I built a dual-graph context engine that indexes a repo at the file level + symbol level and feeds Claude only what's relevant.

I benchmarked 15 prompts across 6 categories and tested 3 modes.

**Modes**

- **Normal:** Vanilla Claude Code. It decides what to read, grep, and explore.
- **MCP-DGC:** Claude Code + my graph engine as an MCP server. Claude asks the graph what's relevant, then reads those files. Claude still drives the reasoning, but with a smarter map.
- **Pre-Inject:** The graph engine feeds context before Claude starts. Claude doesn't search the repo; it gets what it needs upfront.

**Results / Takeaways**

- **Best quality → MCP-DGC.** Claude still explores and reasons, but the graph keeps it focused. This produced the strongest outputs overall.
- **Best cost → Pre-Inject.** Won 11/15 cost comparisons, up to 85% cheaper on feature adds. But you trade away Claude's ability to discover context the graph didn't predict. One refactor prompt even cost +45.5% due to over-focusing.
- **Normal Claude Code.** Still solid, but expensive because it explores broadly every time.

**Default**

Currently shipping MCP-DGC as the default, since quality matters more for real coding workflows. Considering adding a mode flag to switch between cost and quality.

Run `dgc` instead of `claude`.

Curious what people here would optimize for: lower cost or better quality? Want more info? Join the Discord.

Quality comparison

Overall comparison ($0 means the run went into a loop on the issue and took more than 10 minutes)
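The dual-graph idea (a file-level import graph plus a symbol-level call graph, queried for relevant context) could be sketched roughly like this. This is a hypothetical toy, not the actual engine: the class, method names, and the one-hop lookup are my assumptions, since the post doesn't show internals.

```python
from collections import defaultdict

class DualGraphIndex:
    """Toy two-level index: file-level import edges plus symbol-level call edges.
    Hypothetical sketch; the real engine's structure isn't described in the post."""

    def __init__(self):
        self.file_edges = defaultdict(set)    # file -> files it imports
        self.symbol_edges = defaultdict(set)  # symbol -> symbols it calls
        self.symbol_file = {}                 # symbol -> file that defines it

    def add_symbol(self, symbol, file):
        self.symbol_file[symbol] = file

    def add_call(self, caller, callee):
        self.symbol_edges[caller].add(callee)

    def add_import(self, src, dst):
        self.file_edges[src].add(dst)

    def relevant_context(self, symbol, hops=1):
        """Return a focused file list for `symbol`: its defining file,
        the files of symbols it calls (up to `hops` away), and the
        direct imports of those files."""
        seen, frontier = {symbol}, {symbol}
        for _ in range(hops):
            frontier = {c for s in frontier for c in self.symbol_edges[s]} - seen
            seen |= frontier
        files = {self.symbol_file[s] for s in seen if s in self.symbol_file}
        files |= {d for f in list(files) for d in self.file_edges[f]}
        return sorted(files)

# Example repo: parse_config (config.py) calls load_yaml (io_utils.py);
# config.py also imports defaults.py.
idx = DualGraphIndex()
idx.add_symbol("parse_config", "config.py")
idx.add_symbol("load_yaml", "io_utils.py")
idx.add_call("parse_config", "load_yaml")
idx.add_import("config.py", "defaults.py")
print(idx.relevant_context("parse_config"))
# → ['config.py', 'defaults.py', 'io_utils.py']
```

The point of the two levels is that a symbol query pulls in three files instead of the whole repo, which is the token saving the benchmark measures.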
Originally posted by u/intellinker on r/ArtificialInteligence
