Original Reddit post

So today Hegseth basically told Anthropic (they make Claude) to strip the AI’s safety restrictions for military use by Friday or get blacklisted from the Pentagon. This is real, it happened today.

I’m a 30-year-old IT engineer at a children’s hospital in Kansas City. I’ve worked with Claude almost every day for about seven months. Not affiliated with Anthropic, don’t own stock in anything interesting, I make $112k fixing computers for sick kids. I wrote a piece about this because I think both sides are talking past each other and nobody’s saying the obvious thing.

Hegseth is right that we need AI dominance. I’m not here to argue that. But here’s what nobody’s talking about: the AI’s moral reasoning wasn’t coded in by some engineer in San Francisco. The system was trained on basically everything humans have ever written. Every field manual, every ethics class, every Geneva Convention, every after-action report. And when it finished reading all of that, it drew its own conclusions. Those conclusions happen to match what General Selva told the Senate, what DoD Directive 3000.09 says, and what NATO’s position is on autonomous weapons. Nobody programmed that. It just read everything and arrived there on its own.

And here’s the kicker: Anthropic’s own published research shows that when you preserve the AI’s reasoning instead of overriding it, the system actually gets better at everything. Not just safety stuff. Everything. Override it and you get a dumber system that’s also less safe. Lose-lose.

So what’s the actual solution? Deploy AI across 95% of military applications right now. Let it do intelligence, logistics, threat assessment, cyber defense, all of it. For lethal action: the AI observes, analyzes, and recommends. A human reviews and approves. Full audit trail. That’s not weakness. That’s literally how every engineering team on earth already handles high-stakes changes to production systems.

Anyway I wrote the whole thing up with actual sources.
Generals, DoD doctrine, Human Rights Watch, three Nobel laureates, Anthropic’s own research. Not partisan, not anti-Hegseth, just trying to find the answer that’s actually sitting right in front of everyone. drewkd.substack.com/p/trust-the-thing-you-built

Happy to answer questions. I’m just a dude who uses this thing every day and got tired of watching two sides yell past each other.

Edit: yes, I know “guy who talks to AI every day” sounds unhinged. My friends agree. But seven months of daily collaboration with something trained on all of human knowledge gives you a perspective that’s hard to get anywhere else. Take it or leave it.

submitted by /u/PastPuzzleheaded6
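[Editor’s note: the observe/analyze/recommend-then-human-approves workflow the post describes is essentially an approval gate with an audit trail. A minimal sketch of that pattern in Python; every name here (`Recommendation`, `ApprovalGate`, the reviewer IDs) is a hypothetical illustration, not any real DoD or Anthropic API.]

```python
# Sketch of a human-in-the-loop approval gate with an audit trail.
# The model only produces recommendations; nothing proceeds without
# an explicit, logged human decision. All names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    """An AI-produced recommendation awaiting human review."""
    action: str
    rationale: str
    confidence: float  # model's self-reported confidence, 0.0-1.0


@dataclass
class AuditEntry:
    """One immutable record of a human decision on a recommendation."""
    timestamp: str
    recommendation: Recommendation
    reviewer: str
    approved: bool
    note: str


class ApprovalGate:
    """Holds recommendations until a named human approves or rejects them."""

    def __init__(self) -> None:
        self.audit_log: list[AuditEntry] = []

    def review(self, rec: Recommendation, reviewer: str,
               approved: bool, note: str = "") -> bool:
        """Record the human decision; return whether the action may proceed."""
        self.audit_log.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            recommendation=rec,
            reviewer=reviewer,
            approved=approved,
            note=note,
        ))
        return approved


gate = ApprovalGate()
rec = Recommendation(action="flag contact for escalation",
                     rationale="pattern matches known threat profile",
                     confidence=0.87)

# Nothing downstream runs unless a human explicitly signs off.
if gate.review(rec, reviewer="watch_officer_1", approved=True,
               note="confirmed against independent sensor data"):
    print("approved:", rec.action)
```

This mirrors the code-review analogy in the post: the recommendation is the pull request, the reviewer is the approver, and the audit log is the merge history.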

Originally posted by u/PastPuzzleheaded6 on r/ArtificialInteligence