I use a workflow where an initial plan is reviewed by plan-review agents, sometimes across multiple rounds. This produces really good plans that are well thought out and easy to evaluate against during final code reviews, but it has become very time-consuming.

The issue is that I have to manually vet every review finding to filter out unnecessary complexity: fallbacks for unrealistic edge cases, unreachable error handling, unnecessary migrations, “protections against future refactors,” and similar over-engineering. Both the review agents and the plan writer are explicitly instructed not to do this, which helps, but if I don’t check each finding carefully, the plan grows more needlessly complex with every review round. I could skip plan reviews entirely, but they do find genuine holes.

How are others handling this? Opus 4.7 is the plan writer, and Opus 4.7 / GPT 5.5 are the plan review agents.
Originally posted by u/riskywhat on r/ClaudeCode
