I’ve been using Claude Code extensively on a serious engineering project for the past year, and it has genuinely been one of the most impactful tools in my workflow. This is not a post against AI coding tools. But as my team has grown, I’ve watched people struggle in a way that I think doesn’t get talked about honestly enough: using LLMs effectively for development requires a fundamentally different mental model from writing code yourself, and that shift is not trivial.

The vocal wins you see online are real, but they’re not universal. Productivity gains from AI coding tools vary enormously from person to person. What looks like a shortcut for one engineer becomes a source of wasted hours for another, not because the tool is bad, but because they haven’t yet developed the discipline to use it well.

The failure mode is subtle. It’s entirely possible to work through a complex problem flawlessly by hand, yet produce noticeably lower quality output when offloading the same problem to an LLM, particularly when the intent is to skip the hard parts: the logical flow, the low-level analysis, the reasoning that actually builds understanding. The output looks finished. The thinking wasn’t done.

What I’ve come to believe is that the most important thing hasn’t changed: the goal is solid engineering, regardless of how you get there. AI tools don’t lower that bar; they just change what it takes to clear it. The engineers on my team who use these tools well are the ones who stayed critical, stayed engaged, and never confused a coherent-looking output with a correct one.

The learning curve is real. It just doesn’t look like a learning curve, which is what makes it dangerous.
I’m not a good writer and this post is written with assistance from Claude. I won’t share our conversation to avoid doxxing myself.
Originally posted by u/ReiiiChannn on r/ClaudeCode
