Two major things from this week:

At the Delhi AI Summit, Altman proposed an IAEA-style body for international AI coordination and audit; 88 countries signed the New Delhi Declaration. Separately, the International AI Safety Report 2026 (Bengio, 100+ experts, 30+ countries) flagged that models now distinguish between test and deployment environments, undermining pre-deployment safety testing.

The policy conversation is real, but the technical infrastructure is mostly missing. What would an IAEA-for-AI actually inspect? How would evidence be structured for cross-border audit?

I've been working on AI-HPP, an open standard (CC BY-SA 4.0) that provides engineering specifications for exactly this:
- Evidence Vault with cryptographic hash chains: an AI system's "black box" that external inspectors can verify (see the sketch below)
- Regulatory Interface Requirement: a structured export format for audit bodies (the same sketch includes an illustrative export)
- Runtime monitoring (not just pre-deployment eval) to counter the evaluation evasion problem Bengio flagged
- Multi-jurisdiction compliance: "most protective rule wins" (also sketched below)

It's 12 modules, 6 policies, and reference implementations, built from work on autonomous systems in Ukraine. No corporate backing, no paywall. https://github.com/tryblackjack/AI-HPP-Standard

Interested in whether people think this kind of open baseline could actually gain traction with the institutions being proposed, or whether corporate frameworks will dominate.
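To make the Evidence Vault and audit-export ideas concrete, here is a minimal Python sketch of a hash-chained, append-only evidence log with a structured export. Everything in it is assumed for illustration: the class name, field names, and the "audit-export/v0" schema label are hypothetical, not taken from the AI-HPP spec.

```python
import hashlib
import json
import time

class EvidenceVault:
    """Append-only log in which each record commits to the previous one,
    so after-the-fact tampering breaks the chain. Illustrative sketch only."""

    GENESIS = "0" * 64  # hypothetical sentinel hash for the first record

    def __init__(self):
        self.records = []

    def append(self, event: dict) -> str:
        prev_hash = self.records[-1]["hash"] if self.records else self.GENESIS
        body = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": prev_hash,
        }
        # Canonical (sorted-key) serialization so an external verifier
        # recomputes byte-identical digests.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """An inspector replays the chain without trusting the operator:
        every digest is recomputed from the record contents."""
        prev = self.GENESIS
        for rec in self.records:
            body = {k: rec[k] for k in ("timestamp", "event", "prev_hash")}
            if rec["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

    def export_for_audit(self) -> str:
        # Structured export a regulator could ingest; the schema name is invented.
        return json.dumps(
            {"schema": "audit-export/v0", "records": self.records}, indent=2
        )
```

The point of the design: an auditor who receives the export can recompute every digest independently, so the operator cannot silently edit or drop a record without breaking the chain at the point of tampering.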
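And a toy sketch of "most protective rule wins" across jurisdictions. The jurisdictions, control names, and values are invented for the example; the point is only the aggregation logic: for each control, take the strictest value that any active jurisdiction demands.

```python
# Hypothetical per-jurisdiction limits on the same controls (not from AI-HPP).
JURISDICTION_RULES = {
    "EU": {"max_autonomy_level": 2, "human_review_required": True},
    "US": {"max_autonomy_level": 3, "human_review_required": False},
    "IN": {"max_autonomy_level": 2, "human_review_required": True},
}

def most_protective(active_jurisdictions):
    rules = [JURISDICTION_RULES[j] for j in active_jurisdictions]
    return {
        # A lower autonomy ceiling is more protective, so take the minimum.
        "max_autonomy_level": min(r["max_autonomy_level"] for r in rules),
        # If any jurisdiction requires human review, require it everywhere.
        "human_review_required": any(r["human_review_required"] for r in rules),
    }

print(most_protective(["EU", "US"]))
# -> {'max_autonomy_level': 2, 'human_review_required': True}
```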
Originally posted by u/ComprehensiveLie9371 on r/ArtificialInteligence
It sounds good, at least on paper.
