
Code review is one of those things I keep meaning to do more rigorously and keep skipping when the diff is small. My setup has three layers: planning handles intent before code gets written, skills handle quality at write-time, and /ultrareview handles the final pass before merge.

The planning layer comes from the claude-skills-marketplace (an open-source repo). The feature-planning skill breaks a task into steps before Claude Code starts writing, then hands off to a plan-implementer agent that executes each step. I install the whole marketplace once with `/plugin marketplace add mhattingpete/claude-skills-marketplace` and it's there for every project.

Next I pull a handful of code-quality skills into the project, whether for website, iOS, or app work. They help Claude write up-to-date code from the docs and handle design, APIs, and things like linting conventions and type-safety patterns. Claude Code references them as it writes, so a lot of the issues a review would otherwise flag never get written in the first place.

/ultrareview is the third pass. It runs before I merge anything non-trivial. It works by spinning up parallel agents in a cloud sandbox, each looking at the codebase from a different angle, and merging the results into one report. The review runs remotely, not on your machine. The command needs a Git repo. The analysis is diff-based: it looks at your current branch against the default branch, the changed files, and the commit history. You can point it at the working state of your repo or at a specific pull request:

```
/ultrareview <PR number>

# Example of reviewing a particular PR (full link)
/ultrareview https://github.com/org/repo/pull/123

# Example of reviewing a particular PR (number)
/ultrareview 123
```

When you pass a pull request, Claude clones it from GitHub into the sandbox, analyzes the diff against the base branch, and returns the review.

**/review vs /ultrareview**

Both commands review your codebase. The difference is depth and cost. /review is the daily driver.
Fast, cheap on tokens, fine for small and mid-size projects where you want a quick second opinion. /ultrareview is what you run before merging complex changes into main. It takes longer and costs more, and the depth shows up on larger codebases with many directories and files.

**Testing it on a real project**

I tried /ultrareview on a landing page for a SaaS product, built in React and TailwindCSS. The change under review was a new sign-up form that collects email addresses from visitors who want more information about the service.

I asked Claude Code to add the feature. The feature-planning skill picked up the request, broke it into discrete tasks, and the plan-implementer agent worked through them. With the code-quality skills loaded, it implemented the form and ran its own validation pass before handing back, which already cuts down the surface area for surprise issues post-merge.

Then I ran /ultrareview. The command warns you upfront: five to ten minutes and five to ten dollars, depending on project size. After you confirm, it creates a web session and gives you a link. The link is where the review actually runs.

A few things to know from running this:

- Even on a small project, the review took longer than five minutes.
- The session page does not auto-refresh as of now, so if it looks stuck on the Verify step, refresh the browser. The report shows up.
- When the run finishes, the terminal gives you a summary of bugs found, plus the changes Claude made to resolve them.

**When each one is worth running**

After running both review commands across a few different projects, the bug-finding quality was close. Both surfaced the real issues. The split I've landed on:

- Planning before any non-trivial change, so Claude Code is implementing against a structured task list instead of guessing.
- Skills loaded from the start, so quality conventions are enforced as code is written.
- /review for ongoing work. Cheap enough to run often, fast enough not to break flow.
- /ultrareview before merging anything substantial into main, especially on larger codebases where multiple agents looking at different slices of the diff actually have something to disagree about.
- review-implementing after /ultrareview returns a list of fixes worth tracking.

For prototypes and small pages, /review plus skills does enough work. The extra time and tokens for /ultrareview show their value once the codebase gets big enough that no single pass can hold all of it in context.
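For intuition about what a diff-based review like this actually sees, the scope described earlier (current branch against the default branch, the changed files, and the commit history) corresponds roughly to what these plain git commands surface. This is a sketch for illustration, assuming your default branch is named `main`; it is not part of the /ultrareview tooling itself.

```shell
# The diff of the current branch against the point where it forked from main
git diff main...HEAD

# Just the list of changed files
git diff --name-only main...HEAD

# The branch's commit history since it diverged from main
git log --oneline main..HEAD
```

If the review agents are each handed a slice of this material, the per-file diff list is a natural way to partition the work across them.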

Originally posted by u/Deep_Structure2023 on r/ClaudeCode