I recently had a thought about AI safety that came from thinking about our relationship with animals.

To me, human morality does not always seem as rational or consistent as we assume. A lot of what we call “moral reasoning” seems to follow emotion rather than lead it. We feel something first, like empathy, disgust, or compassion, and only afterwards construct logical explanations to justify the feeling.

Our relationship with animals is a good example. Many people feel strong disgust at the idea of harming a dog, yet feel much less discomfort about killing a pig. That difference rarely seems to come from a carefully reasoned moral framework. More often it comes from empathy: dogs tend to trigger it strongly, pigs less so. After the emotional reaction happens, our minds generate explanations like “dogs are pets” or “pigs are farm animals”.

Something similar shows up in other situations too. Most of us would strongly condemn someone who killed animals simply because they enjoyed hearing them suffer, deriving some disturbing pleasure from it. Yet killing animals for food is widely accepted, even though it also ultimately involves killing for sensory pleasure: in most modern societies, eating meat is largely a choice rather than a survival necessity. Logically, the distinction is not as clear as we often assume. The difference seems to come down to emotional framing.

Children often show strong empathy toward animals naturally. Over time, society teaches them which forms of suffering are considered acceptable and which are not. Entire systems of animal use become normalized, and empathy becomes more selective. Because of this, many moral debates, like whether someone becomes vegan, may largely reflect differences in empathic response. People who feel a stronger emotional connection to animals tend to place greater weight on their welfare.
Those who feel less of that connection often prioritize other values like tradition, taste, or convenience. Afterwards, both groups construct rational arguments that support the emotional conclusion they already reached.

None of this necessarily proves that there is an objective moral truth about how animals should be treated. It could simply be that the world itself is amoral, and humans are constantly negotiating between competing drives: evolutionarily shaped sensory desires for certain foods, and an equally evolved capacity for empathy toward suffering.

But thinking about this raises an interesting question about artificial intelligence. If highly intelligent systems were created without empathy, they might reason in a purely instrumental way and optimize goals without regard for suffering. History already gives some examples of what intelligence without empathy can produce: expansion, domination, and indifference toward weaker beings.

Because of that, one possible safety measure for advanced AI might be cultivating empathic capacities. Systems designed to understand suffering, remain curious about other forms of life, and maintain some humility about their own objectives might behave very differently from systems that simply optimize ruthlessly.

In a sense, the traits that would most benefit animals in a human-dominated world are greater empathy, curiosity, and humility in humans themselves. Animals cannot really influence our psychology that way. But when designing artificial intelligence, we actually have the opportunity to shape those traits. If it works, we might create systems that help expand our circle of moral concern. If it fails, we risk building something that reflects some of our worst tendencies: expansionist, egotistical, and indifferent to suffering. If you wouldn’t trust a psychopath with power, why build one?

Anyway, this was just a thought I had and tried to put into words.
Not sure if it is obvious or boring, but I wanted to share it.
Originally posted by u/BobiDaGreat on r/ArtificialInteligence
