Has anyone else noticed how some Claude Code sessions cost you a few cents and others somehow burn through actual dollars, and you can't really tell why after the fact? I kept hitting this: was it retry loops, was it the agent re-reading the same files four times, was the context filling up before compaction kicked in? The JSONL files in ~/.claude/projects/ technically have everything you need, but reading them raw is rough. So I ended up writing a small VS Code extension for myself that just parses those transcripts and lays the session out as a timeline:
- every tool call, every Read/Write/Edit
- per-step token + USD cost
- cache hit ratio
- subagent attribution
- a handful of rules that flag stuff like duplicate reads, retry loops, and context pressure (rough sketch of the detection idea at the end of this post)

It started as a weekend thing, but I kept adding tabs (cost breakdown, a dependency graph of file ops, context window usage) and now I genuinely use it after most sessions to see what the agent actually did vs. what I thought it did.

Pushed it to GitHub as Argus in case anyone else wants to poke at their own sessions. Everything runs locally; it just reads the JSONL files Claude Code already writes. No login, no upload.

Mostly posting because I'd love to hear what patterns you'd want flagged. I've got the obvious ones, but I'm sure people running heavier agent workflows than me have seen failure modes I haven't.

Repo: https://github.com/yessGlory17/argus
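If you want to poke at the raw data without installing anything, the core loop fits in about 50 lines. Here's a rough TypeScript sketch of the kind of pass I mean: stream the JSONL, total up token usage, and count repeated Read tool calls. The field names (`message.usage`, `tool_use` blocks, `input.file_path`) are just what the transcripts looked like on my machine, not a documented schema, so treat all of them as assumptions:

```typescript
// scan.ts: a minimal sketch of the parsing idea, NOT the full extension.
// The JSONL schema is undocumented; every field name below is an assumption
// based on what Claude Code wrote locally and may change between versions.
import * as fs from "node:fs";
import * as readline from "node:readline";

async function scanTranscript(path: string): Promise<void> {
  const reads = new Map<string, number>(); // file path -> Read tool-call count
  let input = 0, output = 0, cacheRead = 0, cacheCreated = 0;

  const rl = readline.createInterface({ input: fs.createReadStream(path) });
  for await (const line of rl) {
    if (!line.trim()) continue;
    let entry: any;
    try { entry = JSON.parse(line); } catch { continue; } // skip truncated lines

    // Accumulate token usage whenever an entry carries it.
    const usage = entry?.message?.usage;
    if (usage) {
      input += usage.input_tokens ?? 0;
      output += usage.output_tokens ?? 0;
      cacheRead += usage.cache_read_input_tokens ?? 0;
      cacheCreated += usage.cache_creation_input_tokens ?? 0;
    }

    // Count Read tool calls per file to spot the agent re-reading things.
    const content = entry?.message?.content;
    if (!Array.isArray(content)) continue;
    for (const block of content) {
      if (block?.type === "tool_use" && block.name === "Read") {
        const target = block.input?.file_path ?? "<unknown>";
        reads.set(target, (reads.get(target) ?? 0) + 1);
      }
    }
  }

  for (const [target, n] of reads) {
    if (n > 1) console.log(`duplicate read x${n}: ${target}`);
  }
  // One way to define cache hit ratio: cached input tokens over all input-side tokens.
  const ratio = cacheRead / Math.max(1, input + cacheRead + cacheCreated);
  console.log(`tokens in/out: ${input}/${output}, cache hit ratio ~${(ratio * 100).toFixed(1)}%`);
}

const sessionPath = process.argv[2];
if (!sessionPath) {
  console.error("usage: tsx scan.ts <session.jsonl>");
  process.exit(1);
}
scanTranscript(sessionPath).catch(console.error);
```

Run it with something like `npx tsx scan.ts ~/.claude/projects/<project>/<session>.jsonl`. The real rules in the extension are fancier (retry-loop detection needs to look at sequences, not counts), but this is the basic shape of the pass.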
Originally posted by u/fIak88 on r/ClaudeCode
