Original Reddit post

I don't think I'm the first to say it, but I hate reviewing AI-written code. It's always the same scenario: the surface looks clean. Types compile, functions are well named, formatting is perfect. But dig into the diff and there's quiet movement everywhere: helpers renamed, logic branches subtly rewritten, async flows reordered, tests rewritten in a different style. Nothing obviously broken, but nothing provably identical in behavior either, and that's honestly what gives me anxiety now.

Obviously I don't think I write better code than AI; I don't have that ego about it. It's more that AI makes these small, confident-looking mistakes that are really easy to miss in review and only show up later in production. That's happened to us a couple of times already. So now every large PR has this low-level dread attached to it, like "what are we not seeing this time?"

The size makes it worse. A 3–5 file change regularly balloons to 15–20 files when the AI starts touching related code. At that scale your brain just goes into "looks fine" mode, which is exactly when you miss things.

Our whole team has almost the same setup: Cursor/Codex/Claude Code for writing, CodeRabbit for local review, then another AI pass on the PR before manual review. More process than before, and more time, because the PRs are just bigger now.

AI made writing code faster, that's for sure. But not code reviews.

Originally posted by u/Motor_Ordinary336 on r/ClaudeCode