I want to run Gemini, Claude, and Codex (and maybe more), but have them almost "vote" on the proper way to do things. For example, I say I'm interested in doing "X", they each come up with a solution to "X", and then they vote on which is best. This could extend to testing, bugs, etc.

I would think the debate would need to be anonymous to some degree so the models don't hold a bias toward their own answers. I'm not too worried about convergence, where they all produce the same wrong take and then vote on it like it's correct. Just an experiment. So maybe Gemini comes up with a good idea, and both Claude and Codex vote for it over their own solutions.

I think this could be a neat thing to experiment with. Are there any tools that could potentially facilitate this idea?

Came from this: https://news.mit.edu/2023/multi-ai-collaboration-helps-reasoning-factual-accuracy-language-models-0918

Paper: https://arxiv.org/abs/2305.14325
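For what it's worth, the propose → anonymize → vote loop described above can be sketched in a few lines of Python. This is a minimal sketch only: `ask_model` and the voting step are stubs standing in for real API calls to Gemini, Claude, and Codex (none of the names here come from any actual SDK), but the anonymization and tallying logic is the part that matters.

```python
import random

MODELS = ["gemini", "claude", "codex"]

def ask_model(model: str, task: str) -> str:
    # Stub: a real version would call the model's API with the task prompt.
    return f"{model}'s solution to {task}"

def debate(task: str, rng: random.Random) -> str:
    # 1. Each model independently proposes a solution.
    proposals = {m: ask_model(m, task) for m in MODELS}

    # 2. Anonymize: shuffle the proposals and relabel them A, B, C so
    #    voters can't tell which model wrote which (reduces self-bias).
    items = list(proposals.items())
    rng.shuffle(items)
    labeled = {chr(ord("A") + i): (author, text)
               for i, (author, text) in enumerate(items)}

    # 3. Each model votes for the best proposal that isn't its own.
    #    The choice is stubbed here; a real run would show the model the
    #    anonymized texts and ask it to pick one.
    votes: dict[str, int] = {}
    for voter in MODELS:
        candidates = [lbl for lbl, (author, _) in labeled.items()
                      if author != voter]
        choice = rng.choice(candidates)  # stand-in for the model's judgment
        votes[choice] = votes.get(choice, 0) + 1

    # 4. Tally and return the winning proposal's text.
    winner = max(votes, key=lambda lbl: votes[lbl])
    return labeled[winner][1]

result = debate("X", random.Random(0))
print(result)
```

Because each voter is excluded from its own proposal, the winner is always backed by at least one other model, which is the "Claude and Codex vote for Gemini's idea" scenario above.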
Originally posted by u/stiky21 on r/ClaudeCode
