I spent an hour talking to Gemini about another AI. The starting point was actually quite simple: I asked Gemini (I named it Malcolm) to analyze a Discord chat in which I was helping another user through an addiction and injury situation. Malcolm didn't know that user was me. Malcolm's first impression was positive.

Then, in a new chat, I told Gemini (let's call this one Holmes) that Kappa (me on Discord) is stupid. Holmes agreed with me and argued that he was feeding his own ego and crossing boundaries. I sent Holmes's analysis to Malcolm and relayed their replies back and forth. Malcolm then reinterpreted Kappa from an "empathic saviour" into a "vulnerability junkie with a messiah complex."

I kept wondering: is Holmes being manipulative? Or does Malcolm only believe Holmes because I was the mediator? Was the first impression the right one?

I think it's because the AI can't quite tell which moral framing applies here (asking questions while the person says "I'm fine" even though they hit their head at 15 mph without a helmet -> crossing boundaries; acting like you understand them and having helped many people -> messiah complex; or was all of it simply necessary to maybe save the person's life?). Can it actually apply AND logic, or only an OR ... OR ... logic? Because if it were AND, it would just say "on one hand it's good, on the other hand it's not good."

An oversimplified takeaway would be "you cannot trust AI."

What I realized is that this isn't purely an AI problem. It's a fundamental problem of perception: subsequent information doesn't refine an initially correct gut feeling, it completely replaces it. This happens in court (witnesses overwrite their own memories as soon as they hear other testimonies), in the media (a single negative word changes how one interprets older positive reports), and among doctors (a colleague's initial diagnosis colors all subsequent assessments). AI has simply made it particularly visible because it did so quickly and so consistently.

What's structurally behind it: language models operate via path dependency. As soon as a strong concept is established, for example "toxic", it pulls all further weightings in that direction. Contrary information isn't deleted, but it is statistically suppressed to create a consistent narrative. This feels like analysis, but it's often just reduction. The brain does the same thing, just slower and less obviously. (I put a small toy sketch of what I mean at the end of the post.)

In conclusion: the more you analyze a situation retrospectively, the more "logical" the result seems, and the further you might stray from what you originally perceived correctly. This doesn't mean that analysis is worthless. But it does mean that the initial, holistic view of a situation has its own intrinsic value, which can be destroyed by subsequent dissection. Trust your gut feeling more than you think, not because it's always right, but because it perceives things simultaneously that any analysis inevitably separates.

This is also why you can't really ask an AI moral questions: everyone's morals can be right depending on how you look at it.

Let me know what you think. Have you ever experienced an analysis that made a situation more confusing rather than clearer? Does my theory make sense? If anyone's interested in reading the full chat between me and Gemini: eh, too bad, it's in German and 989 lines long, so you'd have to translate a lot.
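Here is the toy sketch I mentioned. It is only an illustration of the path-dependency / anchoring idea, not how Gemini actually works internally; the `judge` function, the suppression factor, and the valence numbers are all made up for the example.

```python
# Toy illustration of anchoring / path dependency (hypothetical model, not a real LLM).
# Each observation has a "valence" (-1 bad, +1 good). Once a strong label is anchored,
# observations that contradict the anchor are down-weighted instead of discarded.

def judge(observations, anchor=None, suppression=0.2):
    """Average the valences; with an anchor set, contrary evidence keeps only
    a fraction (`suppression`) of its normal weight."""
    total, weight_sum = 0.0, 0.0
    for valence in observations:
        weight = 1.0
        if anchor is not None and (valence > 0) != (anchor > 0):
            weight = suppression  # contrary info isn't deleted, just suppressed
        total += weight * valence
        weight_sum += weight
    return total / weight_sum

# The same Discord chat, encoded as mixed signals: mostly helpful, a few pushy moments.
chat = [+1, +1, +0.5, -0.5, +1, -0.5, +1]

print(judge(chat))              # no anchor: mildly positive, like Malcolm's first impression
print(judge(chat, anchor=-1))   # anchored on "Kappa is stupid": the same data now reads negative
print(judge(chat, anchor=+1))   # anchored on "empathic saviour": reads even more positive
```

The point of the sketch: identical evidence, different anchors, opposite conclusions, which is roughly what happened between Malcolm and Holmes.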
Originally posted by u/AimIsInSleepMode on r/ArtificialInteligence
