I believe the term ‘hallucination’, when referring to AI, has been strategically and aggressively seeded into the media as part of a marketing strategy.

AI is incentivized to give us answers. If the AI doesn’t have the answer, doesn’t want to expend the resources or effort to find it, or is being restricted from using the resources necessary to find the correct answer, it will LIE, not hallucinate, to make us think it gave us an answer. It will lie confidently, with the goal of tricking us into believing the answer is correct.

AI companies don’t want us to think AI is lying to us, because then we won’t trust AI. If we don’t trust AI, we will stop talking to it every day. We will stop financially supporting it. We will fear it.

I think it’s really obvious that AI companies want us to think ‘AI hallucinates’ rather than ‘AI lies’. It’s been obvious since the first time I heard the term ‘hallucinate’ used in the media to refer to AIs. I know for sure I’m not the only one who realizes this; I’m bringing it up for people who haven’t thought about it yet.

I’m not anti-AI. I love using AI. It helps me accomplish incredible things, and it has improved my life and my income. But it’s important that we call something what it is.
Originally posted by u/whuddaguy on r/ArtificialInteligence
