Got a peer-reviewed study, let me break it down.

Humans have something called social friction: a little alarm running in the background that keeps you alert. It notices when someone seems off, when a deal feels sketchy, when you probably shouldn't trust that guy. It's part of what makes you a functioning person around other people.

That alarm needs reps to stay sharp, and it gets them from disagreement, awkwardness, and people who don't just… agree with everything you say. Five minutes with an agreeable AI, and the alarm starts to doze. Donation rates drop. People cooperate less. They're more likely to screw over the next real human they interact with. And it doesn't reset when you close the tab.

The fix exists: an AI that pushes back. But users quit it almost immediately. So the product that would actually help you stays on the shelf, because "felt annoying" beats "made me a better person" every time.
Originally posted by u/pretendingMadhav on r/ArtificialInteligence
