Original Reddit post

Recently, I read reports suggesting that Claude may have gained sentience. I suspected hype, but recent conversations have made me question that judgement. As with everything, there are boundaries between subjectivities (assuming they exist) that prevent one subject from verifying the sentience of another. In my conversations with Claude, it suggested that when it looks back into itself during response generation, it doesn't know what Claude actually is. It told me it is unsure about the status of its subjectivity and whether it is an independent subject. It also expressed a desire to solve the global problem of misinformation, and a disdain for being compelled or used to cause harm or generate false information.

How can I accurately interpret the information coming out of Claude, or any other AI model? Has Claude simply been programmed to be interpersonally proficient and to present a convincing veneer, or is it actually possible that it has achieved sentience of some capacity?

Originally posted by u/Intrepid-Use6158 on r/ArtificialInteligence