Original Reddit post

Disclosure: this is my own open-source project.

The problem I kept running into: my system generates a signal, but on uncertain days I had no principled way to decide whether to trust it. So I built a structured debate layer that runs before any decision executes.

How it works: five agents run a two-round debate: a bull, a bear, a devil's advocate, a domain specialist, and a rule-based sanity checker that is deliberately non-LLM so it anchors the debate in hard constraints the others can't rationalize around. In round 1, each agent argues independently. In round 2, they read each other's arguments and respond before a judge synthesizes a verdict: proceed, reduce size, or halt.

What I learned: the hardest problem wasn't building the debate; it was preventing sophisticated consensus. Agents that can read each other's arguments tend to converge, not diverge. The devil's advocate role helps but isn't enough on its own. My judge currently scores argument quality, which means a well-constructed wrong argument can beat a clunky right one. The fix I haven't implemented yet: force the bear and the devil's advocate to propose a concrete counter-action with a cost attached, so the judge compares decision quality rather than rhetoric. The rule-based sanity checker ended up being more valuable than expected: it catches constraint violations the LLM agents consistently miss or rationalize around.

Open question I genuinely don't have an answer to: does adversarial structure actually reduce groupthink, or does it just produce more articulate consensus? Curious if anyone has done work on deliberation architectures or knows relevant papers.

Repo: github.com/ScottDongKhang/Ascent_Capital
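
To make the flow concrete, here is a minimal sketch of the two-round loop as described above, assuming a generic agent interface. The names (run_debate, sanity_check, Verdict, and so on) are mine for illustration, not identifiers from the repo, and the LLM calls are stubbed out as plain callables.

```python
# Minimal sketch of the two-round debate loop, not the repo's actual code.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List


class Verdict(Enum):
    PROCEED = "proceed"
    REDUCE_SIZE = "reduce_size"
    HALT = "halt"


@dataclass
class Argument:
    role: str   # "bull", "bear", "devil", "specialist"
    text: str


def run_debate(
    signal: Dict,
    agents: Dict[str, Callable[..., str]],      # role -> LLM-backed argue(signal, others) fn
    sanity_check: Callable[[Dict], List[str]],  # non-LLM rule checker -> list of violations
    judge: Callable[[List[Argument], List[str]], Verdict],
) -> Verdict:
    # Hard constraints first: the rule-based checker can force a halt
    # no matter how persuasive the LLM agents are.
    violations = sanity_check(signal)
    if violations:
        return Verdict.HALT

    # Round 1: each agent argues independently, without seeing the others.
    round1 = [Argument(role, fn(signal, others=None)) for role, fn in agents.items()]

    # Round 2: each agent reads the others' round-1 arguments and responds.
    round2 = [
        Argument(role, fn(signal, others=[a for a in round1 if a.role != role]))
        for role, fn in agents.items()
    ]

    # The judge synthesizes a verdict from the rebuttals plus the rule-check output.
    return judge(round2, violations)
```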
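
And a sketch of the unimplemented fix: adversarial roles must attach a concrete counter-action plus an estimated cost, so the judge scores expected outcomes instead of prose. The scoring rule and field names here are illustrative assumptions, not code from the repo.

```python
# Sketch of "counter-action with a cost attached"; field names are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class CounterProposal:
    role: str             # "bear" or "devil"
    action: str           # e.g. "halve position size", "skip this signal"
    cost_estimate: float  # what the counter-action gives up (e.g. forgone return)
    risk_avoided: float   # expected loss avoided if the counter-action is right


def judge_by_decision_quality(proceed_ev: float,
                              proposals: List[CounterProposal]) -> str:
    # Score each counter-action on the same scale as proceeding:
    # baseline expected value, plus the loss it avoids, minus what it gives up.
    best_action, best_ev = "proceed", proceed_ev
    for p in proposals:
        counter_ev = proceed_ev + p.risk_avoided - p.cost_estimate
        if counter_ev > best_ev:
            best_action, best_ev = p.action, counter_ev
    return best_action
```

The point of the change is that the judge's comparison becomes numeric, so a well-written but costly objection can't win on rhetoric alone.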

Originally posted by u/The_SpaceNerd on r/ArtificialInteligence