I was reading the Thoughtworks retreat notes on the future of software engineering, and one thing stuck with me: “if an AI generates code from a spec, the spec is now the highest-leverage artifact for catching errors. Bad specs produce bad code at scale.”

I don’t really practice classic spec-driven development, but I do rely heavily on plan mode before touching code. What only recently clicked is that the structure of the plan output is surprisingly steerable. With a few repo-level instructions in CLAUDE.md, you can meaningfully shape how plans are formatted and organized. It’s much more prompt-sensitive than I’d assumed.

So I started treating the plan itself as a lightweight spec. Instead of accepting whatever free-form checklist came back, I added guidance to CLAUDE.md to encourage a repeatable structure, hopefully something easier to scan and reason about. Taking advice from the Thoughtworks write-up, I experimented with weaving in elements of EARS (Easy Approach to Requirements Syntax) so parts of the plan read more like testable requirements than loose bullets.

Here’s what I’m currently using:

- Repo instructions (CLAUDE.md): see here
- Example plan generated under those rules: here

Early takeaway: short, well-placed instructions can consistently reshape plan output.

Curious how others here approach this:

- Do you standardize a planning layout across projects? If so, what core sections do you always include?
- Has anyone tried requirement-style phrasing (EARS or similar) inside plans?
- How do you keep plans tight enough to skim, but precise enough to catch issues before implementation?
- Any repo-level nudges that noticeably improved plan quality for you?
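To make the EARS idea concrete, a plan-structure block in CLAUDE.md might look roughly like this. This is an illustrative sketch, not the exact contents of my file; the section names are made up, and the requirement templates are the standard EARS patterns:

```markdown
## Planning

When entering plan mode, structure the plan as:

1. **Goal** – one sentence stating the intended outcome.
2. **Requirements** – phrase each one in an EARS-style template:
   - Ubiquitous: "The <system> shall <response>."
   - Event-driven: "When <trigger>, the <system> shall <response>."
   - State-driven: "While <state>, the <system> shall <response>."
   - Unwanted behavior: "If <condition>, then the <system> shall <response>."
3. **Implementation steps** – an ordered checklist, each step tied back to a
   requirement by number.
4. **Open questions** – anything ambiguous that needs a decision before coding.
```

The point isn’t the exact headings; it’s that requirement-style phrasing gives you something falsifiable to review before any code exists.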
Originally posted by u/magicsrb on r/ClaudeCode
