Original Reddit post

I made a Claude Code plugin that adds structured cross-model deliberation before any code gets written. The setup:

Claude = Prosecution (builds the implementation plan)

Codex CLI = Cross-Examiner (adversarially challenges it)

You = Judge (approves or rejects the final verdict)

The 7-phase workflow: Claude plans → Codex critiques (logical flaws, edge cases, architecture, security) → Claude rebuts each objection (ACCEPT / REJECT / COMPROMISE) → Codex deliberates as a neutral arbiter → the verdict is presented → you approve → the code gets written.

What makes it useful:

  • A built-in weak objection catalog auto-filters 27 false-positive patterns (style nitpicks, YAGNI, scope creep, phantom references) so the debate stays focused on real issues
  • --strict mode for harsher critique, --dual-plan where Codex builds its own plan independently before seeing Claude’s
  • Task-type checklists (bugfix, security, refactor, feature) get injected into the cross-examination so Codex knows what to prioritize
  • Auto-discovers relevant skills from both Claude and Codex and embeds them as context
  • Session logging with objection acceptance rates, so you can see patterns over time

Why two models? Claude reviewing its own plan catches fewer issues than having Codex adversarially challenge it. Codex is good at spotting edge cases Claude glosses over; Claude is good at defending decisions that are actually correct. The debate format surfaces disagreements that a single pass misses.

Install:
/plugin marketplace add JustineDaveMagnaye/the-courtroom
/plugin install courtroom

Then invoke with /courtroom --task "your task". Supports --rounds N for multiple debate rounds, --auto-execute to skip approval, and --quick for fast mode.

GitHub: https://github.com/JustineDaveMagnaye/the-courtroom

Happy to answer questions or take feedback.

Disclosure: I built this plugin. It's free and open source (MIT). No monetization.
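For the curious, the weak-objection filter and the acceptance-rate logging described above can be sketched roughly like this. This is illustrative only: the plugin's actual 27-pattern catalog, log schema, and field names ("text", "ruling") are not shown in this post, so everything below is an assumption about the shape of the idea, not the plugin's real code:

```python
# Illustrative sketch only. The real catalog has 27 patterns; the three
# below (and the "text"/"ruling" field names) are hypothetical stand-ins.
import re

# Stand-ins for the weak-objection catalog (style nitpicks, YAGNI, scope creep)
WEAK_PATTERNS = [
    re.compile(r"\b(style|naming|formatting)\b", re.I),     # style nitpicks
    re.compile(r"\bYAGNI\b|aren't gonna need", re.I),       # YAGNI objections
    re.compile(r"\bscope creep\b|\bout of scope\b", re.I),  # scope creep
]

def filter_objections(objections):
    """Drop objections matching any weak-objection pattern."""
    return [o for o in objections
            if not any(p.search(o["text"]) for p in WEAK_PATTERNS)]

def acceptance_rate(log):
    """Fraction of logged objections ruled ACCEPT or COMPROMISE."""
    if not log:
        return 0.0
    accepted = sum(o["ruling"] in ("ACCEPT", "COMPROMISE") for o in log)
    return accepted / len(log)

objections = [
    {"text": "Variable naming is inconsistent", "ruling": "REJECT"},  # filtered: style nitpick
    {"text": "Race condition if two requests hit the cache", "ruling": "ACCEPT"},
    {"text": "Input length is never validated", "ruling": "COMPROMISE"},
    {"text": "Retry loop has no backoff", "ruling": "REJECT"},
]
kept = filter_objections(objections)
print(len(kept))                        # 3 objections survive the filter
print(round(acceptance_rate(kept), 2))  # 0.67
```

The point of the filter is that the debate only spends rounds on objections that survive it, and the per-session acceptance rate is what surfaces patterns over time.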

Originally posted by u/Difficult_Term2246 on r/ClaudeCode