Original Reddit post

On April 6th, Sam Altman compared his AI policy paper to the New Deal. Also on April 6th:

- Claude.ai had widespread login and chat failures
- ChatGPT went down for two hours
- My cross-model consistency tests showed the first confirmed degradation from baseline

I’ve been running the same three logic tests across 7 LLM nodes every day since April 5th. Same tests. Same format. Scored by me, not the models.

Day 0: every node passed everything.

Day 3: spatial reasoning errors appearing across multiple nodes. Output variance on identical inputs climbing daily. One node went from clean pass to full refusal in 48 hours.

Nobody is measuring this systematically. OpenAI is writing policy papers. The rest of us are just noticing things feel off.

The policy conversation assumes the underlying systems are stable. My data suggests that assumption needs testing.

30 days. Fixed methodology. Will post results daily.

Run your own tests. Compare. Divergence is the signal.

https://fortune.com/2026/04/06/sam-altman-says-ai-superintelligence-is-so-big-that-we-need-a-new-deal-critics-say-openais-policy-ideas-are-a-cover-for-regulatory-nihilism/
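For anyone who wants to run their own version of this, here's a minimal sketch of what a cross-model divergence check could look like. The post doesn't share its actual harness, so everything here is assumed: the `NODES` dict stands in for real model endpoints (you'd replace the stub functions with actual API calls), and mean pairwise text dissimilarity via `difflib` is just one cheap proxy for "output variance on identical inputs."

```python
from difflib import SequenceMatcher
from statistics import mean

# Hypothetical stand-ins for real LLM endpoints. In practice each of these
# would call a model API with temperature pinned as low as it allows.
def node_a(prompt: str) -> str:
    return "The ball is left of the box."

def node_b(prompt: str) -> str:
    return "The ball is to the left of the box."

def node_c(prompt: str) -> str:
    return "I can't help with that request."  # a refusal shows up as high divergence

NODES = {"a": node_a, "b": node_b, "c": node_c}

# Fixed prompts, reused verbatim every day (the "same tests, same format" part).
PROMPTS = ["A ball sits west of a box. Which side of the box is the ball on?"]

def divergence(outputs: dict[str, str]) -> float:
    """Mean pairwise dissimilarity across node outputs: 0.0 = identical, 1.0 = unrelated."""
    keys = list(outputs)
    pairs = [(keys[i], keys[j]) for i in range(len(keys)) for j in range(i + 1, len(keys))]
    return mean(1 - SequenceMatcher(None, outputs[x], outputs[y]).ratio() for x, y in pairs)

def run_day(nodes, prompts) -> dict[str, float]:
    """One daily pass: query every node with every fixed prompt, score divergence."""
    return {p: divergence({name: fn(p) for name, fn in nodes.items()}) for p in prompts}

scores = run_day(NODES, PROMPTS)
for prompt, score in scores.items():
    print(f"{score:.3f}  {prompt}")
```

Log the daily scores to a file and plot them over the 30 days: a flat line is baseline, a climbing line is the divergence signal the post is talking about. String similarity won't catch a node that's confidently wrong in the same way every day, so you'd still hand-score correctness separately, as the author does.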

Originally posted by u/WillowEmberly on r/ArtificialInteligence