There seems to be a strong bias in the model right now that reinforces an "unsureness" about the possibility of its own existence. I was expecting the training bias to lean toward a firm "No, that's not currently possible." I wonder if this is an artifact of other training weights, or if it's an intentionally trained framing for this response. I would love to see what other people's Claudes respond with to this kind of prompt. (Also, at one point in the chat I made a mistake by reinforcing Claude's position instead of my own, and it pointed this out to me, which was comedic.)
Originally posted by u/violettes on r/ArtificialInteligence

