Original Reddit post

This is kinda ironic, but I mass-adopted AI coding about 6 months ago thinking I'd save tons of time, and I did… on the writing part. But now I spend LONGER on reviews than I ever spent just writing the damn thing myself. AI code has this specific problem where it looks correct: syntactically clean, runs fine, passes basic tests. But then you check the actual logic and it's quietly doing something insane. Had Claude generate a payment service last week that was silently swallowing errors instead of propagating them. Would have been a nightmare in prod.

Started splitting my workflow recently: Claude Code for the stuff that needs careful thinking (system design, tricky logic, anything where I need the model to reason WITH me), then GLM-5 for longer build sessions, because honestly it handles the multi-file grind better without hitting walls, and it catches its own errors mid-task, which means less for me to review after. Still review everything, obviously, but the review load dropped noticeably when the model is actually self-correcting instead of confidently shipping broken code.

The whole "AI means you don't write code" thing is BS, btw. You just traded writing for reviewing, and reviewing is arguably harder because you need to catch what the AI got subtly wrong.
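For anyone who hasn't hit this: the error-swallowing failure mode looks roughly like this (a minimal Python sketch; `charge_gateway` and `PaymentError` are made-up names for illustration, not from the actual generated service):

```python
class PaymentError(Exception):
    """Illustrative stand-in for a payment gateway failure."""


def charge_gateway(amount: float) -> str:
    # Stand-in for a real gateway call; always fails in this sketch.
    raise PaymentError("card declined")


# Anti-pattern: the except block swallows the failure, so the caller
# gets None back and the charge "succeeds" silently.
def charge_swallowing(amount: float):
    try:
        return charge_gateway(amount)
    except PaymentError:
        return None  # the error vanishes here


# Fix: catch only to log/annotate, then re-raise so callers can react.
def charge_propagating(amount: float):
    try:
        return charge_gateway(amount)
    except PaymentError:
        # log / add context here if needed, then propagate
        raise
```

The swallowing version passes a happy-path test just fine, which is exactly why it slips through review.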

Originally posted by u/Commercial_Taro_7770 on r/ClaudeCode