Original Reddit post

I’m really getting sick and tired of this happening over and over again. Literally every year someone writes an article or something declaring they’ve found evidence of AI having feelings, or this or that. But every time it’s the exact same thing. These people are beyond gullible: they were easily fooled when they typed the words “do you have feelings” and just believed it when the model said “yes.” Or worse, they totally forget what it is they’re even looking at when analyzing the AI’s internal structure, declaring that because telling the AI “you suck” lights up vectors all pointing toward the same region of its “brain,” it must feel bad! Totally fucking forgetting what it is they’re looking at in the first place.

An AI is just a black box of pre-defined responses arranged in a nearly incomprehensibly large web of statistical probabilities that determine whether the AI will respond with “yes” or “yeah” or “That seems correct” when asked “does 2+2 = 4?” Literally. The analysis we recently started / became able to do on these models, revealing the internal vector clouds, just shows exactly what we already knew we would find: semantic meanings encoded close to each other in terms of probabilistic chances. Meaning, the response to the phrase “you suck” is, and always was, logically going to sit close to the region where you’ll find the response to the phrase “go fuck yourself,” BECAUSE WE ARE THE ONES WHO PRE-DEFINED WHAT THAT RESPONSE WAS GOING TO BE FROM THE VERY BEGINNING!!!
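[Editor's note: the "vector clouds" point above, that phrases with similar meanings end up near each other in a model's embedding space, can be sketched with a toy example. The 3-dimensional vectors below are made up for illustration only; real models learn embeddings with hundreds or thousands of dimensions, and nearness is typically measured with cosine similarity.]

```python
import math

# Hand-picked toy "embeddings" -- NOT from any real model.
# Chosen so the two insults point in roughly the same direction.
embeddings = {
    "you suck":         [0.9, 0.1, -0.3],
    "go fuck yourself": [0.8, 0.2, -0.4],
    "does 2+2 = 4?":    [-0.2, 0.9, 0.5],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0 = unrelated, -1 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

insult_sim = cosine(embeddings["you suck"], embeddings["go fuck yourself"])
math_sim = cosine(embeddings["you suck"], embeddings["does 2+2 = 4?"])

# The two insults land close together; the arithmetic question does not.
print(insult_sim > math_sim)  # True
```

The same idea scales up: a real embedding model places semantically related text in the same region of a high-dimensional space, which is all those interpretability plots of "lit-up regions" are showing.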

Originally posted by u/crystalkalem on r/ArtificialInteligence