I’ve been using Claude Code full-time across 20 projects. About a month ago my team and I started hitting usage limits consistently mid-week. I couldn’t figure out why: my prompts weren’t long, and some of my codebases aren’t huge.

So I wrote a hook script that logs every file read Claude makes, with token estimates. Just a PreToolUse hook that appends to a JSON file. The pattern was clear: Claude doesn’t know what a file contains until it opens it. It can’t tell a 50-token config from a 2,000-token module. In one session it read server.ts four times. Across 132 sessions, 71% of all file reads were files it had already opened in that session.

The other thing: Claude has no project map. It scans directories to find one function when a one-line description would have been enough. It doesn’t remember that you told it to stop using var, or that the auth middleware reads from cfg.talk, not cfg.tts.

I ended up building this into a proper tool: 6 Node.js hooks that sit in a .wolf/ directory:
- anatomy.md – indexes every file with a description and a token estimate. Before Claude reads a file, the hook says “this is your Express config, ~520 tokens.” Most of the time the description is enough and it skips the full read.
- cerebrum.md – accumulates your preferences, conventions, and a Do-Not-Repeat list. The pre-write hook checks new code against known mistakes before Claude writes it.
- buglog.json – logs every bug fix so Claude checks known solutions before re-discovering them.
- token-ledger.json – tracks every token so you can actually see where your subscription goes.

I tested it against bare Claude CLI on the same project with the same prompts. Claude CLI alone used ~2.5M tokens; with OpenWolf it used ~425K. That’s about an 80% reduction.

All hooks are pure file I/O. No API calls, no network, no extra cost. You run openwolf init once, then use Claude normally. It’s invisible.

Open source (AGPL-3.0): https://github.com/cytostack/openwolf
Originally posted by u/LawfulnessSlow9361 on r/ClaudeCode
