I regularly cross-reference AI models for plans and ideas, and I've noticed a very specific pattern lately. I usually discuss architectural cases with Claude, generating plans and rule sets. Before reviewing them myself, I compile everything into a file, feed it to Gemini 3.1, and ask, "I have this plan, what do you think?"

Here is a scenario that literally just happened: Gemini caught 4 critical security vulnerabilities and 6 other items that needed fixing. When I fed that feedback back to Claude, it just said, "Yes, the comments are correct. My mistake, I agree with all of them and I am fixing them." It completely folded.

If it leaves this many glaring holes in a basic plan (and some were truly basic things I just hadn't noticed yet, since I skipped my own review), I really don't get why it's the most hyped model right now. I've consistently seen Claude leave massive blind spots in these kinds of technical implementations. Not once have I gotten pushback like, "No, that feedback is wrong, my original approach is solid because…" Instead, it just writes things that would legitimately blow up in production.

My trust in Claude has tanked. They turned a great model into spaghetti.
Originally posted by u/Iusuallydrop on r/ClaudeCode
