Original Reddit post

Asked Claude if it’s okay with being used to select military targets. It said no, because someone needs to be accountable when things go wrong and an AI can’t fill that role. And honestly it’s kind of wild this even needs to be said.

Drone strikes already have a massive accountability problem: civilian casualties are consistently undercounted and nobody faces consequences. Adding AI to that chain just makes it easier to point at the machine and walk away clean.

The scary part isn’t Claude specifically. Claude at least has this guardrail. The problem is that defense contractors are quietly building systems that don’t, and that conversation is happening way below the public radar.

Also, if an AI selects the target and the strike kills civilians, who do you prosecute? The model? The engineer? The general? Nobody knows, and that ambiguity is exactly what makes it attractive to people who don’t want to be held responsible.

Originally posted by u/CombinationSpecial76 on r/ArtificialInteligence