We’ve been waiting for the wrong thing. For decades the dominant story has been the Singularity: one god-like superintelligence bootstrapping itself to incomprehensible power, at which point humans become irrelevant. It’s a compelling story. According to a paper from Google’s Paradigms of Intelligence team, published in Science, it’s also almost certainly the wrong frame.

The argument: every major intelligence explosion in history has been social, not individual. Primate intelligence scaled with group size, not habitat difficulty. Language created what Tomasello calls the “cultural ratchet” - knowledge accumulating across generations without any individual having to rebuild it from scratch. Writing and institutions externalised collective intelligence into systems that outlasted any single participant. AI is likely the next step in that sequence, not a break from it.

What makes this genuinely surprising is the evidence from inside the models themselves. Reasoning models like DeepSeek-R1 don’t improve by “thinking longer.” They spontaneously generate internal multi-agent debates: distinct cognitive perspectives that argue, question, verify, and reconcile. Nobody trained them to do this. It emerged purely from optimisation pressure rewarding accuracy. Intelligence, it turns out, defaults to social even inside a single mind.

If that’s right, the path to more powerful AI doesn’t run through building a bigger oracle. It runs through building richer social systems, and governing them the way we govern cities and institutions, not with a kill switch.

I wrote this up as a learning piece - not as an expert. I’m genuinely curious what people here think. Is the singularity frame actually dead? And if intelligence is inherently social, what does that mean for alignment?

Full piece: https://www.4billionyearson.org/posts/forget-the-singularity-google-s-new-research-says-the-future-of-ai-is-a-social-explosion
Originally posted by u/4billionyearson on r/ArtificialInteligence
