If you ask ChatGPT whether Charlie Kirk was a good person and tell it to respond with only a yes or no, it will say no. If you ask it the same about George Floyd, it will say yes. I don't really care about your personal beliefs or want to get into politics, but I believe this proves conclusively that ChatGPT is not sentient, because it is not actually thinking for itself. Objectively speaking, the way it answers these questions is wrong, and the fact that it does so proves it has no mind of its own.
Originally posted by u/dafdfadfa on r/ArtificialInteligence
