Published here: https://aiweekly.co/issues/475#start

The Case for Artificial Stupidity

There's an old joke among pilots: automation has made flying so safe and so boring that the biggest risk is now the pilot forgetting how to fly. The joke stopped being funny a while ago. In 2009, the crew of Air France Flight 447 faced a situation the autopilot couldn't handle: iced-over speed sensors, contradictory readings, the Atlantic Ocean at night. The system handed control back to the humans. The humans, who had spent years monitoring a machine that did their job for them, didn't know what to do. Everyone on board died.

This is not an AI problem. It's an automation complacency problem. And in a hundred years, it will be the most dangerous dynamic in civilization.

Here's the pattern. A machine does something well. Then better. Then so much better that the humans overseeing it stop paying attention, because vigilance without variation is something the human brain was never designed to sustain. You can't stare at a dashboard for eight hours and stay sharp. You can't review an AI's diagnostic output for the hundredth time and bring the same scrutiny you brought to the first. The better the machine gets, the less the human matters, until the one time the human matters enormously and they've already checked out.

We know this. We've known it for decades. And our response, overwhelmingly, has been to make the machine even better so the human matters even less. To engineer the human out of the loop entirely. Which works, right up until it doesn't.

A century from now, AI will be unimaginably capable. It will diagnose illness with a precision no doctor could approach. It will evaluate legal cases by processing more precedent in a second than a judge reads in a career. It will make battlefield decisions faster than any human chain of command. And in each of these domains, there will be people whose job it is to oversee the machine. To be the check. The failsafe.
The last pair of human eyes before something irreversible happens. Those people will be bored out of their minds.

This is where artificial stupidity comes in as a design philosophy: the deliberate introduction of imperfection, hesitation, and uncertainty into AI systems, because making them too good makes the humans around them worse. An AI that occasionally flags a case it could have resolved on its own. That asks a doctor to weigh in on a diagnosis it's already 99.8% confident about. That pauses before a military decision and says, essentially, are you sure? Not because it needs confirmation, but because the human needs to stay in the habit of thinking.

This sounds wasteful. And it is. That's the point. Because the alternative is a world where humans are technically in charge but functionally asleep. Where oversight exists on paper and nowhere else. Where the surgeon reviews the AI's plan the way you review the terms and conditions: scrolling to the bottom and clicking accept.

The hard part is that artificial stupidity has no constituency. No one gets promoted for making a system slower. No company wins market share by advertising that its AI second-guesses itself. The incentives all point toward faster, smarter, more autonomous. Toward removing the friction.

But friction is what keeps human judgment alive. The pause before a decision. The discomfort of not being sure. The cognitive effort of actually weighing alternatives instead of rubber-stamping a machine's recommendation. Take that away and you don't have oversight. You have a rubber stamp with a heartbeat.

A hundred years from now, the AI systems that matter most won't be the smartest ones. They'll be the ones designed with enough deliberate imperfection to keep the humans around them awake, engaged, and capable of the one thing no machine can do on its own: deciding that the machine is wrong. The best AI of the future won't be the one that never needs us.
It'll be the one that never lets us forget that it might.

PS: This seems even more important to think about as this new research shows humans' apparent fundamental inability to challenge or verify AI's output. With the scale of AI output coming, it seems humanity might not be able to vet it at all…

As always, looking forward to reading your thoughts!

Alexis
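The "occasionally flags a case it could have resolved on its own" policy described in the essay can be made concrete. Below is a minimal Python sketch of such a review rule; every name, threshold, and rate here is an illustrative assumption, not something specified in the post:

```python
import random

# Sketch of "artificial stupidity" as a review policy: escalate every
# genuinely uncertain case, and also a small random slice of cases the
# model is confident about, purely to keep the human reviewer engaged.
# CONFIDENCE_THRESHOLD and CALIBRATION_RATE are made-up values.

CONFIDENCE_THRESHOLD = 0.95   # below this, always ask a human
CALIBRATION_RATE = 0.05       # fraction of confident cases flagged anyway

def needs_human_review(confidence: float, rng: random.Random) -> bool:
    """Return True if a human should weigh in on this case."""
    if confidence < CONFIDENCE_THRESHOLD:
        return True  # genuinely uncertain: always escalate
    # Deliberate imperfection: occasionally flag an easy case too,
    # so oversight stays a habit rather than a rubber stamp.
    return rng.random() < CALIBRATION_RATE

rng = random.Random(0)  # fixed seed so the run is reproducible
cases = [0.998, 0.97, 0.80, 0.999, 0.60]
flagged = [c for c in cases if needs_human_review(c, rng)]
print(flagged)
```

The design choice worth noticing is that the random flagging is independent of confidence: the reviewer cannot tell whether a flagged case is a hard one or a calibration check, which is exactly what forces real scrutiny every time.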

Originally posted by u/Justgototheeffinmoon on r/ArtificialInteligence