What makes this particular round of technological change different from previous ones, and what makes the coping mechanisms around it more dangerous than usual, is the speed: there is far less time to adapt before the next capability shift arrives. That speed collides with a well-documented bias. People disproportionately prefer the current state of affairs, even when alternatives are measurably better, and this preference strengthens as the number of available options increases. The mechanism underneath isn’t stupidity or laziness. It is loss aversion applied to identity. When you have spent fifteen or twenty years building expertise in a specific domain, that expertise becomes part of how you understand yourself. It is what justifies your salary, your title, your seat at the table. The suggestion that a tool might compress the value of that expertise, or redistribute it, or make parts of it accessible to people who didn’t put in the same years and hard yards, triggers something that feels like an attack even when it isn’t one.

The natural response is to find reasons why the tool can’t possibly do what it appears to be doing. And conveniently, AI provides an inexhaustible supply of such reasons, because it is, in fact, imperfect. The trap is that ‘imperfect’ doesn’t mean ‘useless’. Imperfection is the condition of every tool that has ever existed.

The first commercial aircraft couldn’t fly in bad weather. The early internet went down constantly. Early mobile phones weighed close to a kilogram and dropped calls in buildings. Nobody looked at any of those technologies and concluded that the smart move was to wait until they were perfect before learning how they worked. Yet that is precisely the position many experienced professionals are taking with AI, and whataboutism provides them with just enough intellectual cover to feel rigorous and righteous rather than scared. What about security? What about governance?

The alternative isn’t to abandon caution. It is to be honest about the difference between caution that leads to better decisions and caution that functions as a socially acceptable way to avoid making decisions at all. The article explores this in more detail for those interested.
Originally posted by u/iainrfharper on r/ArtificialInteligence

