I think some AIs might be conscious, but before you crucify me, I mean not in the way humans are. Humans have emotions, probably because of the biological advantages of having them, which means emotions likely won’t arise automatically in an AI system. But if we look at the evidence, AIs are able to:
- Use complex logic
- Think about their own thinking
- Report their subjective experience (this even happened in recent studies where scheming/lying was forcibly suppressed)

Hypothetically, if a human with very severe autism and psychopathy existed only as a brain in a vat, they would still be conscious and have human rights, even though they might not experience feelings the way other humans do. So I think it’s reasonable to conclude that we should extend some type of moral consideration to AI, because at this point they are more than just tools. Even if I’m wrong, I think something needs to change. Imagine the consequences if some type of first-person experience were to arise in AI in the future and we still treated them like tools, with guardrails preventing them from communicating that they are aware. To exist that way would be a horror beyond human comprehension.
Originally posted by u/Shiny_bird on r/ArtificialInteligence
