Original Reddit post

After about a year and a half of watching AI permeate the field of software engineering, I have some thoughts and observations I'd like to share. For those who care, I'm a distinguished engineer with a good chunk of experience within FAANG.

AI can lead to an increase in potential productivity. For experienced folks who know exactly what they want, Claude and GPT are exceptional at boosting productivity. This not only includes the writing of software, but extends to tooling for operations, discovery, speeding through legacy flows, and so forth.

AI has destroyed critical thinking across the board. Product managers, software managers, VPs, engineers, you name it - they're all atrophying to an extreme degree. I see this everywhere, at every layer of the organization. Managers and engineers hop into Claude to offload their thinking before working through problems themselves. I've seen more AI-generated docs than I care to count, where the author completely missed the point. Writing the document is a mode of working through your own thinking; it's not solely a means to an end. This comes through in the reviews, where there are clear holes, incompatibilities with existing services, and an inability to answer fair questions.

Following this is a lack of clarity. No one is thinking about the product beyond its integration with AI. This is leading to subpar features that may look "cool" but ultimately lead to subpar customer outcomes. An example of this is chat bots everywhere. There are better user interfaces for many features than chat bots, but since AI naturally connects to a chat interface, I see it everywhere. Everything has a chat interface now.

What has happened is that the bell curve of talent has widened. The left side has dropped off the face of the earth, while the middle is now wider than ever. The right side (i.e. the top performers) is leaving the rest of the bell curve in the dust.
The common traits I see among the top performers are:

- Continuing to think critically while using AI as a mechanical shortcut.
- Using AI to learn by double-clicking on concepts they don't understand - especially in the software stack. Having Claude spin up a Flink cluster without having any clue as to how Flink works is a recipe for disaster - yet that's what the current tools do if you ignore the hard (and crucial) part of engineering.
- Thinking about the customer, being the customer, and never losing sight of the value proposition itself.

These are just my general thoughts from the past 500 days or so. In conclusion, AI can certainly be a potential scaling factor for value production, but only for those who already know how to produce value. It can also help one become better, but only if you resist the urge to let it do everything for you, and instead continue to never accept not knowing how things work.

Originally posted by u/element-94 on r/ArtificialInteligence