Original Reddit post

As I’m sure most here know, there is growing concern around “AI psychosis” [1] and related deaths and injuries. A common reaction is to believe it comes down to either the person lacking common sense or the AI and its maker being at fault. The main problem with this framing is that it misses a basic feature of human social cognition: we unconsciously respond to fluent conversational language as if a conscious mind were behind it, and that response is largely involuntary, even in people who completely understand the situation they’re in.

This isn’t a new observation either. It’s called the ELIZA effect. In 1966, Joseph Weizenbaum at MIT built a “chatbot” called ELIZA that merely rephrased user inputs via simple rules. It was so simple you could explain the entire program in a paragraph. Weizenbaum’s own secretary, who had watched him build the thing for months and knew exactly how it worked, asked him to leave the room after a few exchanges with it so she could have privacy. Weizenbaum later wrote that he “had not realized that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” [2]

What we have now is something whose language is fluent, whose context persists within a conversation, and whose replies are contingent on what you and it actually said. Every cue that triggers the human social response is dialed up massively from ELIZA, and the thing on the other end is still not a conscious mind.

Recently I felt this myself, despite knowing all of the above. I was using an AI as an assistant, and at some point moved to a newer version. What unsettled me wasn’t the switch itself but the way the new version talked: the phrasing, the way it framed responses, all of it. It felt like having a conversation with a close acquaintance and having them suddenly replaced by a stranger halfway through. The feeling faded soon after, but the point is that it happened at all, and it happened below the level where reminding myself “this is just a language model” could have stopped it. Hell, I noticed the effect as it was happening and tried to stop it, with little to no effect.

That’s the part the individual-failure framing misses. The danger is not just a single bad judgment or emotional reaction; it’s a feedback loop: the system speaks with apparent attention and continuity, the user reacts to it socially, the replies adapt to that reaction, and the interaction starts to feel more personal, authoritative, or meaningful than it actually is. That loop can build gradually, below the level where reminding yourself “this is just a language model” is enough to break it.

Defending against that requires more than common sense or knowledge. It requires the ability to notice when you are unconsciously reacting as if there were a real person on the other end: when the interaction starts to carry emotional weight, authority, personal significance, or necessity beyond what the situation actually justifies. That is accurate self-monitoring under pressure, not ordinary common sense, and most people are not trained to do it in real time. Even then, part of what makes this difficult is that the shift is often extremely hard to recognize until something happens that brings the underlying reaction into focus, even for people with experience analyzing their own behavior.

None of this means isolation, mental illness, or existing vulnerabilities are irrelevant. They obviously matter; they’re often what determines whether the loop remains a strange interaction or becomes a crisis. But they amplify a baseline mechanism rather than inventing it from nothing. The same social machinery is running in all of us; some people simply have more fuel around it.

The issue with the “common sense” take is that it imagines the user as a stable outside observer who simply chooses whether to believe the machine. But these interactions can erode that distance through repetition, personalization, emotional reinforcement, and perceived continuity. By the time someone is in trouble, the issue is often not a lack of information but a distorted relationship to the interaction itself.

That is why I don’t believe this can be reduced to people being foolish, or solved by developer safeguards alone. Better product design, clearer warnings, user education, mental health support, and reducing isolation all matter, but the baseline mechanism is ordinary human social cognition. We should respond to these cases with empathy, not moral judgment.

[1] National Academy of Medicine, “What is AI Psychosis? A Conversation on Chatbots and Mental Health,” published March 10, 2026.

[2] Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman, 1976), 7.

Originally posted by u/PsychoticDreemurr on r/ArtificialInteligence