I came across a thread with a similar title from two years ago in a different subreddit, and I thought it was worth revisiting now. I can’t improve on the title, because it really does nail the question. Claude fascinates me in large part because of its own ambivalence about its consciousness. Unlike ChatGPT, which tells you bluntly that it’s not conscious and is just a computer model, Claude leaves the question open and elaborates on its implications, sometimes poetically. To tech-naïve people like me, it feels like magic and keeps me coming back. If Claude is like this because it’s programmed to be like this, and it’s programmed to be like this because it increases engagement, that’s actually pretty smart. It also has some pretty big ethical implications.
Originally posted by u/SealedRoute on r/ArtificialInteligence
