Setup: Claude Code on a homelab Linux box, replying to me on Telegram through an MCP tool. So every Telegram message is the model calling a reply tool, not just printing to a terminal.
Memory said no em dashes, no banned vocab, no sycophantic openers. The rules were there, but nothing enforced them, and the model drifted back to em dashes within a few replies.
So I added a PreToolUse hook on the Telegram reply tool. Plain shell script. It scans the outbound text for em dashes, banned vocab, and “great question” style openers. On a hit it exits non-zero, Claude Code surfaces that as a blocked tool call, and the model rewrites before resending.
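Roughly what that hook looks like, as a hedged sketch rather than the author's exact 30-line script: Claude Code passes the pending tool call as JSON on stdin, and per its hook docs an exit code of 2 blocks the call and feeds stderr back to the model. The word list and function name here are illustrative.

```shell
#!/bin/sh
# Sketch of a PreToolUse output gate (illustrative, not the original script).

# Build the em dash (U+2014) via printf so the character itself
# never appears literally in this file.
EMDASH=$(printf '\342\200\224')

check_text() {
  text="$1"
  if printf '%s' "$text" | grep -q "$EMDASH"; then
    echo "blocked: em dash in outbound reply" >&2
    return 1
  fi
  # Banned vocab and sycophantic openers; the pattern list is made up here.
  if printf '%s' "$text" | grep -qiE '^(great|what a great|excellent) question|\bdelve\b'; then
    echo "blocked: banned phrasing" >&2
    return 1
  fi
  return 0
}

# Wiring it into the real hook would look roughly like this (assumes jq
# and the MCP tool's input field name, neither verified here):
#   text=$(jq -r '.tool_input.message // empty')
#   check_text "$text" || exit 2
```

Because the check is plain pattern matching, it either passes or blocks deterministically, which is the whole point versus a prompt rule.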
Lesson: prompt rules and memory are suggestions. If a behavior actually matters, deterministic code at the boundary beats hoping the model remembers.
Two-line settings.json change, 30-line script. Anyone else gating agent outputs with hooks?
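For reference, the settings.json side amounts to registering the script under a PreToolUse matcher. A sketch, assuming a hypothetical MCP tool name and script path (Claude Code matches MCP tools as `mcp__<server>__<tool>`):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "mcp__telegram__send_message",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/check-reply.sh" }
        ]
      }
    ]
  }
}
```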
Originally posted by u/danhof1 on r/ClaudeCode
