I’ve tried to cover this better in the article attached, but TLDR: the standard control problem framing assumes AI autonomy is something that happens to humans - drift, capability overhang, misaligned objectives - the thing you’re trying to prevent.

Georgetown’s CSET reviewed thousands of PLA procurement documents from 2023-2024 and found something that doesn’t fit that framing at all. China is building AI decision-support systems specifically because they don’t trust their own officer corps to outthink American commanders under pressure. the AI is NOT a risk to guard against. it’s a deliberate substitution for human judgment that the institution has already decided is inadequate.

the downstream implications are genuinely novel. if your doctrine treats the AI recommendation as more reliable than officer judgment by design, the override mechanism is vestigial. it exists on paper, but the institutional logic runs the other way. and the failure modes - systems that misidentify targets, escalations operators can’t reverse - get discovered in live deployment, because that’s the only real test environment that exists. also, simulation-trained AI and combat-tested AI are different things, and how different is something you only discover when it matters.

we’ve been modeling the control problem as a technical alignment question. but what if the more immediate version is institutional - militaries that have structurally decided to trust the model over the human, before anyone actually knows what the model does wrong?
Originally posted by u/Cool-Ad4442 on r/ArtificialInteligence

