This has been bothering me, and I'm curious if others feel the same.

Historically, new frameworks and libraries came from humans struggling:

• Repetition was painful → abstractions emerged
• Code was hard to reason about → better mental models formed
• Scaling teams was messy → conventions and patterns evolved

In short: thinking led to tooling.

Now it feels inverted. AI can write 90–95% of the code:

• Components
• State logic
• Tests
• Refactors
• Even architectural "suggestions"

And while that's impressive, here's my concern: are we still thinking deeply, or just approving outputs?

It feels like:

• We're shipping more
• Writing less
• But also reasoning less

Instead of "What's the right abstraction?" we ask "Can the model generate this?"

Instead of "Is this a good idea?" we ask "Does it work?"

That's not the same thing.

The last frontend things that genuinely felt like they pushed human thinking forward (for me) were Tailwind CSS and Next.js, not because they were magic, but because they forced clarity around constraints and tradeoffs. You still had to think.

With AI, I worry we're optimizing for output over understanding.

If:

• The model decides structure
• The model writes logic
• The model explains the code

Then what happens to:

• Taste?
• Judgment?
• Architectural intuition?
• The uncomfortable thinking that used to produce better tools?

I'm not anti-AI. I use it daily. But I am worried that we're training ourselves to be editors instead of creators.

So I'm curious:

• Does AI free humans to think more, or less?
• Are we heading toward better software, or just faster mediocrity?
• What does "senior engineering judgment" even mean in 5 years?

Would love perspectives from people who are excited, skeptical, or somewhere in between.

(Of course, written with ChatGPT)
Originally posted by u/Dev_Nerd87 on r/ArtificialInteligence
