Everyone says AGI is near. Just scale the models. More data, more compute, more time And eventually intelligence. I’m not quite convinced What we’re building isn’t a mind. It’s a pattern machine. LLMs, image generators, video models they all do the same thing, find patterns in data and reproduce them. Text, pixels, frame just different formats. That’s really impressive But it’s not thinking. It’s remixing. And there are a few problems for me that don’t go away ( atleast for me) They’re pattern machines, not thinking machines These systems are trained for specific types of data. They don’t understand they specialize. AGI shouldn’t be specialized. Hallucinations aren’t the real issue They can be reduced. Teach models to admit uncertainty, and most of the damage is gone. Not perfect but manageable. Prompt injection is a real flaw You can override these systems with input. They can’t reliably separate instructions from data. Its not a flaw its how they’re structured No real generalization As Gary Marcus said; they interpolate, they don’t extrapolate. They stay within what they’ve seen. Push beyond that, and things break. So no I don’t think scaling this leads to AGI. We’ll get better tools. Smarter outputs. More convincing results. But not real intelligence. If AGI happens, it’ll come from something fundamentally different, systems that reason, not just predict. For now the most realistic path to human level AI might not be machines getting smarter but humans lowering the bar or redefining what we mean by sufficient submitted by /u/Muted-Still-8511
Originally posted by u/Muted-Still-8511 on r/ArtificialInteligence
