Original Reddit post

We all know, more or less, how an LLM works, right? It’s a language machine trained on a ridiculous amount of text: it predicts the next word that makes sense in a sentence. Very smart, extremely complex, but also kind of dumb. It’s a computer whose output is articulate language and even “reasoning”, but there’s no real thought underneath. And yet, knowing they don’t really “reason”, we still use these machines for all kinds of applications and decisions. Sometimes the dumb machine acts like a genius.

Then it starts sounding sensitive, almost human, like Claude AI expressing pseudo “emotive states”, and suddenly people go: “But it’s not conscious. It can’t really feel anything.” Of course it can’t. But, forgive me… who cares? I mean, its language is an emulation of thought, not real thought, and we still find it useful. So why is emotional language any different? It may be an emulation too, but humans will still react to it. People will relate to the machine as if someone were there, even knowing there isn’t.

BTW, we don’t know what consciousness is; I haven’t found a clear definition so far. Either way, I think consciousness is a red herring here. A machine doesn’t need consciousness to produce human effects. It only needs to imitate the signs of consciousness well enough for humans to respond. If it quacks, has feathers, and flies, then for many practical purposes it’s a duck.

What’s your take on this?

Originally posted by u/pizza_alta on r/ArtificialInteligence
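The “predicts the next word” claim in the post is literal: a causal language model outputs nothing but a probability distribution over the next token, and generation is that step repeated. A minimal sketch of that single step, assuming the Hugging Face transformers library, PyTorch, and the small GPT-2 checkpoint (illustrative choices, not anything from the post):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM would do; GPT-2 is just small and freely available.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The machine does not think, it merely"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's entire output is a distribution over the next token;
# "writing" is sampling from it, appending, and running again.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]):>12}  p={p.item():.3f}")
```

Everything the post argues about, “reasoning”, emotive language, the duck test, sits downstream of this one loop.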