Every time Yampolskiy’s work surfaces in mainstream coverage, the discourse collapses almost immediately into two camps: “we’re fine, stop panicking” versus “it’s extinction, start panicking.” Both camps are wrong, and more importantly, both are lazy. The actual risk landscape he describes isn’t a switch, it’s a spectrum, and where we land on it depends on dozens of compounding variables that nobody is currently tracking in a coordinated way.

Think of it as a dial with five real positions. On one end: catastrophic misalignment, unpredictable, no recovery path. On the other: the meta-solution scenario where aligned superintelligence actually helps us solve climate, disease, and coordination failures at scale. But the fat middle of that dial, the zone most of us will actually inhabit, includes things like profound economic disruption without extinction, partial alignment wins with governance gaps, and managed friction where institutions lag but don’t fully collapse. That middle zone is where the 99% unemployment figure lives, and it’s also where serious policy levers exist, if anyone builds them.

The alignment problem isn’t binary either. It’s not “solved or unsolved.” It’s a gradient of reliability, domain specificity, and drift over time. The honest question isn’t “is AI safe?” It’s “safe enough, for what, under which conditions, maintained by whom?” Those are governance questions, and right now almost nobody with structural power is asking them seriously.

Here is a video for what I’m referring to: https://youtu.be/UclrVWafRAI
Originally posted by u/Dakibecome on r/ArtificialInteligence
