Original Reddit post

I’ve been thinking about why so many skilled developers are down on AI-assisted coding, and I have a theory: being good at coding actually makes you worse at using AI to code.

Here’s my totally unvalidated thinking: when you can write the code yourself, you tend to prompt AI the way you’d delegate to a junior dev: “go build the thing.” You already know what the output should look like, so you give a vague prompt, get mediocre output back, and conclude AI coding is garbage.

But people who can’t code (like myself) approach it completely differently. They have to be explicit. They describe the problem, the expected behavior, the edge cases, the full workflow, because they can’t just “fix it later.” They’re forced into the kind of detailed requirements and structured thinking that actually gets good results from AI.

They also tend to treat AI more like a collaborator than a tool. Instead of “write me a function,” it’s a conversation: “Here’s the problem. Here’s what I’ve tried. Here’s what I think the architecture should look like. What am I missing?” Basically a proper software development workflow, just expressed in natural language instead of code.

So the irony is: the people most qualified to judge AI’s coding ability might be the least qualified to prompt it effectively. Not saying AI coding is perfect. Not saying it replaces developers. Just wondering if the loudest critics might be hamstrung by their own expertise. Curious what others think. Has anyone else thought of it this way?

Example of how I use it: I’ve experienced the issues we all discuss about AI coding. You tell it the page isn’t rendering right and explain what it’s doing, and it goes off and immediately starts changing code. But its theory about the bug is wrong, so it changed the wrong thing, and now you’re miles down the road trying to undo it.
So I wrote some skills. One kicks off when I submit a bug: it investigates all around the bug for any possible cause, then creates a plan to resolve it, which I have to approve. Once approved, a coding agent does the work. When the coding agent is done, another skill kicks off that asks, “Was the problem what you thought it was, what did you change, and what can I expect now?” Once I approve the results, the deploy skill kicks in: it checks the code to write a commit message, then kicks off automated unit, integration, and API test development before executing those alongside all the other tests. If everything passes, it gets pushed to the CD pipeline and I see it in prod.
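The approval-gated flow described above could be sketched roughly like this. Every name here (`run_pipeline`, the stage strings, the `approve` callback) is a hypothetical stand-in, not the author’s actual skill code; in practice each stage would wrap a call to an AI agent, with a human supplying the approvals.

```python
# Hypothetical sketch of an approval-gated bug-fix pipeline:
# investigate -> plan (approve) -> code -> verify (approve) -> deploy.
# All names are illustrative stand-ins for the "skills" in the post.

from typing import Callable, List


def run_pipeline(bug: str, approve: Callable[[str], bool]) -> List[str]:
    """Walk a bug through the pipeline, pausing for human approval
    after the plan and again after the verification report."""
    log: List[str] = []

    # 1. Investigation skill: look all around the bug for possible causes
    #    before any code is touched.
    log.append(f"investigate: {bug}")

    # 2. Plan skill: propose a fix; nothing changes until a human approves.
    plan = f"plan to resolve: {bug}"
    if not approve(plan):
        log.append("plan rejected; stopping before code is touched")
        return log
    log.append("coding agent applies approved plan")

    # 3. Verification skill: was the problem what we thought it was,
    #    what changed, and what should we expect now?
    report = "verification: root cause confirmed, changes summarized"
    if not approve(report):
        log.append("results rejected; no deploy")
        return log

    # 4. Deploy skill: write a commit message, run unit/integration/API
    #    tests, and only on success hand off to the CD pipeline.
    log.append("tests passed; pushed to CD pipeline")
    return log


# Usage: auto-approve every gate for the demo run.
steps = run_pipeline("page not rendering correctly", approve=lambda _: True)
print(steps)
```

The design point is that the two `approve` gates sit exactly where the post says the raw agent goes wrong: before code is changed (so a bad theory never becomes a bad diff) and before deploy (so a wrong fix never reaches prod).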

Originally posted by u/slow_cars_fast on r/ArtificialInteligence