Original Reddit post

I built a spatial navigation layer for AI agents that outperforms RAG on complex retrieval tasks

**TL;DR**

SenseMCP is an open-source MCP server that replaces flat cosine-similarity search with cone-based spatial navigation through embedding space. Your AI agent doesn't just search, it navigates: looking in directions, moving through concepts, finding paths between ideas, and remembering where it's been.

On a 1K-item benchmark across 5 knowledge domains: 20x more cross-domain connections found, 2x more bridge items on multi-hop reasoning, and 76% broader coverage on exploration, while matching RAG on simple lookups.

GitHub: https://github.com/Toni-d-e-v/sense-mcp

**The problem with RAG**

RAG is stateless and one-directional. You embed a query, fetch the top-k most similar items, and stuff them into context. This works great for "find me documents about X" but falls apart when you need to:

- **Find connections between distant concepts:** items that bridge "ecology" and "machine learning" share no keywords, so cosine similarity never surfaces them
- **Explore broadly:** RAG returns a tight cluster of similar items, missing entire subdomains
- **Reason across multiple hops:** a question like "how does reinforcement learning apply to drug discovery?" needs items from RL, from drug discovery, AND from the intersection

**How SenseMCP works**

Instead of a search box, think of a field of vision. Every piece of data is a point in space. Your agent has a position, can look in directions, and can move through the space.

The navigation loop:

1. **Scan the space:** see what topics/domains exist
2. **Look in a direction:** see what's nearby in that direction
3. **Move toward interesting results:** shift your position
4. **Look again:** you see different things from a new vantage point
5. **Path between concepts:** find stepping-stone items connecting two ideas
6. **Intersect multiple directions:** find items that sit at the overlap of two topics

Each action updates the agent's spatial state.
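To make the look/move loop concrete, here's a minimal sketch of a stateful navigator over a toy embedding space. The names (`Agent`, `look`, `moveToward`) and the step logic are illustrative assumptions, not SenseMCP's actual API; the point is that position and visit history persist across calls, unlike a stateless top-k search.

```typescript
type Vec = number[];

const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);
const norm = (a: Vec) => Math.sqrt(dot(a, a));
const cosine = (a: Vec, b: Vec) => dot(a, b) / (norm(a) * norm(b) || 1);

interface Item { id: string; vec: Vec; }

class Agent {
  constructor(
    public pos: Vec,
    private items: Item[],
    public visited = new Set<string>(),
  ) {}

  // "Look" in a direction: rank unvisited items by similarity to the
  // point the agent is facing (position + direction).
  look(direction: Vec, k = 3): Item[] {
    const target = this.pos.map((x, i) => x + direction[i]);
    return this.items
      .filter(it => !this.visited.has(it.id))
      .sort((a, b) => cosine(b.vec, target) - cosine(a.vec, target))
      .slice(0, k);
  }

  // "Move" toward an item: shift position partway and remember the
  // visit, so the next look happens from a new vantage point.
  moveToward(item: Item, step = 0.5): void {
    this.pos = this.pos.map((x, i) => x + step * (item.vec[i] - x));
    this.visited.add(item.id);
  }
}
```

Repeating `look` and `moveToward` is what lets later queries surface items a single cosine-similarity pass from the starting point would never rank highly.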
The agent builds up understanding of the space over multiple steps, like exploring a map instead of typing into a search bar.

**What's under the hood**

14 MCP tools, 8 core algorithms, 3 embedding backends. The full technical breakdown (cone queries, HNSW indexing, MMR diversity reranking, etc.) is in the GitHub README; here's the short version:

- Results get deduplicated and reranked so you don't get 10 near-identical items
- When a query is too narrow, the system automatically widens the search
- Complex queries get broken into sub-queries that search in multiple directions
- A strategy advisor picks the best navigation approach for each query type
- At 50K items, indexing gives a ~90x speedup over brute-force search

Embedding backends: built-in local embeddings (free, no API key), or OpenAI if you want higher quality.

**Benchmark highlights**

Full benchmark details, with all the tables, are on GitHub.

*Synthetic benchmark (1K items, 5 domains)*

*Wikipedia benchmark (166 real article chunks)*

Tested against real Wikipedia content with real embeddings:

- **Neuro + AI intersection:** SenseMCP found 10/10 relevant keywords vs RAG's 9/10
- **Math + Physics + Info theory:** SenseMCP reached 3 domains, RAG only reached 2
- **Ecology + Computation:** SenseMCP bridged Biology and CS; RAG stayed stuck in Biology
- **Information theory exploration:** SenseMCP reached 5 domains, including Physics, which RAG never touched

The pattern: RAG returns more keyword-precise results from a narrow region. SenseMCP trades some keyword precision for cross-domain reach: it finds results in domains that RAG never gets to.

**The honest take**

SenseMCP is not better than RAG at everything. For simple "find me documents about X" queries, cosine similarity is fast and effective; SenseMCP matches it but doesn't beat it.

Where SenseMCP shines is discovery: finding connections between distant concepts, exploring broadly across a knowledge space, and multi-hop reasoning that requires bridging multiple domains.
These are exactly the tasks where current RAG pipelines fall flat.

It's also slower: multi-step navigation takes 20-200ms vs RAG's 3-5ms. Fine for agent workflows where quality matters, not for autocomplete.

**Try it**

```
git clone https://github.com/Toni-d-e-v/sense-mcp
cd sense-mcp
npm install
npm run benchmark       # synthetic benchmark (instant, no API key)
npm run benchmark:wiki  # wikipedia benchmark (real embeddings)
```

No API key needed for local embeddings. Works with Claude Desktop or any MCP-compatible client. MIT licensed.

Interested in: feedback on the cone query approach, ideas for better exploration strategies (the keyword gap on exploration tasks bugs me), and whether anyone has datasets where they'd want to test this.

Originally posted by u/Toni1012 on r/ArtificialInteligence