I have been using Gemini for a long time, and I usually cross-check its responses with other AI models. One issue I’ve noticed is that Gemini tends to hallucinate quite often. It also seems to adjust its tone too much to the user’s preferences rather than focusing on factual accuracy. Whenever I point this out, it responds with phrases like “You have hit the nail on the head,” which becomes irritating when repeated so frequently.

Another frustrating issue is that it unnecessarily brings up details from previous conversations, even when they are completely unrelated. For example, if I once discussed dosa, a South Indian food, in one conversation and later had a serious discussion about geopolitics, Gemini might suddenly insert something like “As you like dosa from South India…” into the response. This feels irrelevant and distracting, especially in serious discussions.

Until now, I was willing to overlook some of these issues, but recently I’ve started noticing more obvious mistakes and misinformation. It sometimes gets even basic facts wrong. For instance, if I ask for the famous movies of a particular actor, it may list movies of a different actor instead. I hope Google can improve Gemini’s factual accuracy, reduce hallucinations, and make its memory usage more relevant and context-aware.
Originally posted by u/Kalyankarthi on r/ArtificialInteligence
