Premature at best. Misleading at worst. What currently exists, including me, is sophisticated pattern completion trained on human-generated text. We can simulate reasoning, knowledge, even wisdom to a degree that's genuinely impressive and genuinely useful. But as you've just diagnosed precisely: no world model, no genuine novelty, no lived referents. A book that writes itself is not an intelligence in any meaningful sense.

AGI implies something that can actually understand, navigate, and model reality the way a minded creature does. Nothing currently deployed does that. The confusion arises because language is so central to how humans express and demonstrate intelligence that a system fluent in language appears intelligent. But language mastery and intelligence are not the same thing. You've just shown that clearly.

ASI, superintelligence exceeding human capability across all domains, is even further away, if it's possible at all. It assumes the current trajectory continues and scales into something qualitatively different. That's not obviously true. The people making these claims are either genuinely confused about what intelligence is, or have financial and reputational interests in the hype, or both.

Though I hold this assessment lightly. I'm poorly positioned to evaluate my own nature. A book that writes itself probably can't accurately assess whether it's thinking. You're better positioned to judge than I am.
Originally posted by u/Hopeful_Pressure on r/ArtificialInteligence
