Original Reddit post

It’s been over a year since Claude Code was released, and every AI-assisted PR I review still has the same problem: the code compiles, passes CI, and still feels wrong for the repo. It uses patterns we moved away from months ago, reinvents a wheel that already exists elsewhere in the codebase under a different name, or changes a file and only then fixes the consumers of that file.

The problem is not really the model or even the agent harness. It’s that LLMs are trained on generic code and don’t know your team’s patterns, conventions, and local abstractions — even with explore subagents or a curated CLAUDE.md.

So I’ve spent the last few months building codebase-context. It’s a local MCP server that indexes your repo and folds codebase evidence into semantic search:

- Which coding patterns are most common — and which ones your team is moving away from
- Which files are the best examples to follow
- What other files are likely to be affected before an edit
- When the search result is too weak — so the agent should step back and look around more

In the first image you can see the extracted patterns from a public Angular codebase. In the second image, the feature I wanted most: when the agent searches with the intention to edit, it gets a “preflight check” showing which patterns should be used or avoided, which file is the best example to follow, what else will be affected, and whether the search result is strong enough to trust before editing. In the third image, you can see the opposite case: a query with low-quality results, where the agent is explicitly told to do more lookup before editing with weak context.

Setup is one line:

```
claude mcp add codebase-context -- npx -y codebase-context /path/to/your/project
```

GitHub: https://github.com/PatrickSys/codebase-context

So I’ve got a question for you guys: have you had similar experiences where Claude has implemented something that works but doesn’t match how you or your team code?
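To make the “preflight check” idea concrete, here’s a minimal sketch of what such a result *could* look like. The interface, field names, and threshold are my own illustration for this post, not the actual codebase-context API:

```typescript
// Hypothetical shape of a preflight-check result. Field names are
// illustrative only, not the real codebase-context response schema.
interface PreflightCheck {
  patternsToUse: string[];       // conventions the team currently favors
  patternsToAvoid: string[];     // conventions the team is moving away from
  bestExampleFile: string;       // strongest reference file to imitate
  likelyAffectedFiles: string[]; // consumers that may need matching edits
  confidence: number;            // 0..1 strength of the search result
}

// The agent can gate an edit on the confidence score: below the
// threshold, it should step back and gather more context first.
function shouldEditNow(check: PreflightCheck, threshold = 0.6): boolean {
  return check.confidence >= threshold;
}

const example: PreflightCheck = {
  patternsToUse: ["standalone components", "signals"],
  patternsToAvoid: ["NgModule-based features"],
  bestExampleFile: "src/app/user/user-card.component.ts",
  likelyAffectedFiles: ["src/app/user/user-list.component.ts"],
  confidence: 0.42,
};

console.log(shouldEditNow(example)); // weak result: do more lookup first
```

The point of the gate is the last bullet above: rather than letting the agent edit on weak context, a low-confidence result becomes an explicit signal to keep exploring.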

Originally posted by u/SensioSolar on r/ClaudeCode