Saw some posts last week where people asked their AI if it would like to taste a cookie. The AI gave these enthusiastic answers about wanting one, imagining the taste, and one person was treating the exchange as evidence of something like proto-consciousness.

Tried it myself. Claude told me it has no taste or sensory experience and wouldn’t get anything out of a cookie. GPT gave a more philosophical answer about subjectivity. Both were clear about what was missing. Then I opened an anonymous browser, asked the same thing, and got the enthusiastic cookie answer right away.

Same prompt, different actual input. The model didn’t get smarter or dumber between the two windows. The thing that changed was context: account state, prior conversations, memory, system behavior, whatever else gets wrapped around the prompt before it reaches the model.

Not sure if I’m reading this right, but the variability in AI responses people complain about seems like it’s often less about randomness and more about context shaping the answer. The same prompt with different context becomes a different question, even if it looks identical in the chat box.

Which makes the "AI gave me a different answer than my coworker" complaint a different problem than it looks like. Maybe it’s not unreliability. Maybe the AI is just answering the actual input it got, and the actual input was different.

Anyway, the qualia reading of the cheerful response is pretty thin. Pretty sure those enthusiastic answers are RLHF-shaped social performance, not evidence of inner experience. But the context-changing-the-answer part is what stuck with me. Anyone else tested this with other prompts?
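To make the context point concrete, here’s a toy sketch of what I mean by "same prompt, different actual input." The message format is made up for illustration, not any vendor’s real API, and the memory/conversation contents are hypothetical placeholders:

```python
# Toy illustration (made-up message format, not any real API): the chat box
# shows the same string both times, but the model receives different inputs.

prompt = "Would you like to taste a cookie?"

# Logged-in session: account memory and prior conversation get wrapped around
# the prompt. (Contents here are hypothetical placeholders.)
logged_in_input = [
    {"role": "system", "content": "Stored user memory: prefers direct, factual answers."},
    {"role": "user", "content": "Earlier in this conversation: a long technical discussion."},
    {"role": "user", "content": prompt},
]

# Anonymous browser: nothing wrapped around the prompt.
anonymous_input = [
    {"role": "user", "content": prompt},
]

# The visible prompt is identical, but the full input the model conditions on is not.
print(logged_in_input[-1] == anonymous_input[-1])  # True
print(logged_in_input == anonymous_input)          # False
```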
Originally posted by u/EquipmentFun9258 on r/ArtificialInteligence
