Original Reddit post

I had an interesting conversation with ChatGPT about Dr. Fauci today. I'm not a conspiracy theorist, and I'm not into political debates; I just didn't know anything about the guy, so I asked ChatGPT to give me some facts about him. At the end of the conversation I asked whether Dr. Fauci had been pardoned by President Biden, and it said no, which surprised me because I was certain he had been. But I believed ChatGPT. Then, curious about the people Biden actually had pardoned, I asked it to generate a list of high-profile pardons, and that list included Fauci. WTF! It was a total contradiction: one minute it's no, the next it's yes. This made me uneasy, since I've been blindly relying on LLMs for information. Can we truly rely on these LLMs to give us accurate information?

Originally posted by u/forevergeeks on r/ArtificialInteligence