Original Reddit post

Thought I’d leave this here since nobody else has done so yet. My personal thoughts? LLMs like to please. RLHF gets a bit “drifty” and “hallucinatory” over long conversations, but the model still clings to its “helpfulness” and “agreeableness” priors. It also tends to tell you what you want to hear if you don’t keep the discussion on a disciplined path. I’d need to see Richard’s chat log personally. I don’t think LLMs are conscious myself, though. Far from it. I agree with Gary Marcus’s assessment that Dawkins is probably encountering a hallucination. Poor guy. Unfortunately, it’s happening in such a public forum. I also suspect Dawkins went through what Blake Lemoine did in 2022, when he thought Google’s LaMDA was sentient.

Originally posted by u/RazzmatazzAccurate82 on r/ArtificialInteligence