Saw a lot of posts today about local LLM optimizations to reduce token usage in API calls, and had a wacky inspiration. I spent the last hour whipping up a Hyperdimensional Computing MCP server that loads your code into a local index your session can query (~1ms query times), returns results over the API in tool format, and cuts your token count with your LLM provider by over 90% (results may vary; I'm still testing against "real" codebases). Happy to hear any feedback that verifies it offers an improvement. Even if the token claim is off, this should dramatically speed up code lookups compared to grep and the like. https://github.com/boj/opty
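For anyone unfamiliar with the HDC angle: the rough idea is to encode code snippets as high-dimensional bipolar vectors so lookups become fast similarity comparisons instead of text scans. Here's a minimal sketch of that style of index; this is my own toy illustration, not the repo's actual implementation, and the dimensionality, tokenization, and the `token_hv`/`encode`/`query` names are all made up for the example.

```python
import hashlib
import math
import random

DIM = 2048  # illustrative; real HDC systems often use ~10k dimensions

def token_hv(token):
    # Deterministic bipolar (+1/-1) hypervector per token, seeded from its hash.
    rng = random.Random(hashlib.sha256(token.encode()).digest())
    return [1 if rng.random() < 0.5 else -1 for _ in range(DIM)]

def encode(text):
    # Bundle (elementwise sum) the hypervectors of all tokens in a snippet.
    acc = [0] * DIM
    for tok in text.split():
        hv = token_hv(tok)
        for i in range(DIM):
            acc[i] += hv[i]
    return acc

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy "codebase": snippet name -> bundled hypervector.
index = {
    "parse_config": encode("def parse_config path read yaml config file"),
    "send_request": encode("def send_request url post json retry timeout"),
}

def query(text, k=1):
    # Rank indexed snippets by cosine similarity to the query's hypervector.
    q = encode(text)
    return sorted(index, key=lambda name: -cosine(q, index[name]))[:k]

print(query("read config yaml"))  # surfaces parse_config
```

Because token vectors are pseudo-random and near-orthogonal, unrelated snippets score near zero, so the query only needs one dot product per indexed snippet; the server would then send just the top matches over the API rather than the whole file.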
Originally posted by u/humetrical on r/ClaudeCode
