TL;DR: Anthropic shipped Opus 4.7 with a note that it takes instructions literally and that prompts should be re-tuned. Claude Code still ships the same ~30K-character system prompt across Opus 4.7, Sonnet 4.6, and Haiku 4.5. So 4.7 reads every line of scaffolding written for models that used to skip most of it, and spends capacity on ceremony.

From the Opus 4.7 launch: “where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly.” Source: https://www.anthropic.com/news/claude-opus-4-7

What that looks like in practice: walls of text restating the question, “Before I do X…” preambles, defensive enumeration of skipped steps, and end-of-turn wrap-ups on simple replies.

I tested skrabe/lobotomized-claude-code, which strips ~20K characters from the hot-path scaffolding via tweakcc. The difference is immediate: less restating, less acknowledgement, a faster first response. I’m not affiliated with the repo; I just installed and tested it.

For scale: pi ( https://github.com/earendil-works/pi/blob/main/packages/coding-agent/src/core/system-prompt.ts ) ships a full coding-agent system prompt at around 1,500 characters, smaller than Claude Code’s Bash tool description alone. It works fine. Smaller is not “less safe”; it’s matched to a model that takes you literally.

Anthropic’s own tracker already documents this:
- Issue #52979: 20-30K token overhead on trivial prompts in clean environments. Open, bug, area:cost. https://github.com/anthropics/claude-code/issues/52979
- Issue #32508: the model attributes its action-before-understanding bias directly to the prompt (“I confuse ‘don’t talk much’ with ‘don’t think much’”). Open, enhancement, stale. https://github.com/anthropics/claude-code/issues/32508

21 releases since v2.1.119, and not one touches prompt architecture. The fix isn’t complicated: trim the shared prompt, split it per model in the harness, or both. Until one of those ships, the most literal model in the lineup keeps carrying scaffolding written for the ones that weren’t.

Repo: https://github.com/skrabe/lobotomized-claude-code

Anyone else running stripped prompts? Curious what worked.
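For anyone wanting to sanity-check the overhead numbers, here’s a back-of-envelope sketch. The conversion factors are my assumptions, not measurements from the post: roughly 4 characters per token, and a placeholder input price of $15 per million tokens. Swap in real prices and a real tokenizer before trusting the dollar figures.

```python
# Back-of-envelope cost of re-sending a large system prompt on every request.
# ASSUMPTIONS (mine, not the post's): ~4 chars/token, $15 per million input tokens.
CHARS_PER_TOKEN = 4
PRICE_PER_MTOK_USD = 15.0

def prompt_overhead(prompt_chars: int, requests: int) -> tuple[int, float]:
    """Return (approx. tokens per request, approx. USD cost across `requests`)."""
    tokens = prompt_chars // CHARS_PER_TOKEN
    cost = tokens * requests * PRICE_PER_MTOK_USD / 1_000_000
    return tokens, cost

# Claude Code's ~30K-char prompt vs. pi's ~1,500-char prompt, over 1,000 requests:
for name, chars in [("claude-code", 30_000), ("pi", 1_500)]:
    tokens, cost = prompt_overhead(chars, 1_000)
    print(f"{name}: ~{tokens} tokens/request, ~${cost:.2f} per 1,000 requests")
```

Prompt caching changes the dollar math but not the attention cost: the model still reads every line, every turn.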
Originally posted by u/tenequm on r/ClaudeCode
