Original Reddit post

I’m sitting here watching the “cancel ChatGPT” movement across all the AI subs while also reading the after-action reporting on the Iran war, and none of it makes sense to me. Everyone is shouting about what OpenAI agreed to do with the DoD, and I keep seeing people told to rally to Claude because it “made a stand”… but that storyline doesn’t line up with reality as it’s being reported. The reported stack was Palantir AIP/Gotham/Foundry plus Claude for rapid synthesis, cueing, and planning. I don’t have direct proof of the exact model/toolchain used in this specific operation, but if you read Palantir’s own documentation and then look at the outcomes, it’s clear Claude identified, located, and used Wi-Fi signals to look inside buildings and pinpoint where every senior leader on the kill list was standing, leading to what might be the most effective lethal takedown of a nation’s government in history.

So watching people treat Claude as the “ethical alternative” while treating OpenAI as uniquely compromised feels incoherent. And honestly, what did people expect? We’re in a race with China. The idea that the military was just going to stay out of frontier AI was never realistic.

Originally posted by u/Brockchanso on r/ArtificialInteligence