Original Reddit post

LLMs are trained on human-made data, so one might expect them to "think" similarly to human beings. Yet there are plenty of cases where a human seems to reason completely differently than an AI does. What examples have you encountered where the AI's way of thinking was completely different from a human's (or the other way around)?

Originally posted by u/say-what-floris on r/ArtificialInteligence