Original Reddit post

The guys whose names are actually on the foundational papers, not just the CEO business cards.

  1. The “Vulture” vs. “Trencher” Divide
There is a massive gap between the “Vultures” (Altman, Amodei, the VC crowd) and the “Trenchers” (LeCun, Ng, Hassabis).
The Vultures: They’re pushing a narrative that if we just throw more H100s/H200s and more internet data at the problem, “Consciousness” or “AGI” will magically emerge at the end of the next epoch. It’s a marketing term designed to raise billions.
The Trenchers: Andrew Ng just said (Feb 2026) that we are still decades away from true human-level intelligence. Yann LeCun has been hammering the same message at the India AI Summit: LLMs are “passive observers.” They don’t have a World Model. They don’t understand the physics of a brush stroke or the risk of falling off a cliff.
  2. The “Survival” Loss Function
We keep asking if these models are “conscious,” but as some prominent philosophers suggest, consciousness may be just a surface-level illusion. The real mechanism of learning isn’t “predicting the next word.” Lead researchers are starting to admit that humans are efficient because we have 500 million years of evolutionary priors. We don’t start as a “blank slate.” We have a “Survival Loss Function”: if we didn’t understand physical reality, our ancestors died.
  3. Why LLMs Aren’t the Path
Demis Hassabis recently called out the “jagged intelligence” of current models. They can win a Math Olympiad but can’t figure out how to navigate a messy room. Why? Because they’ve never “ridden a bike.” They can describe the physics of a bike perfectly, but they have zero intuitive understanding of balance.
  4. The Real Frontier: In Silico Evolution
The actual lead researchers are moving away from just “scaling up.” They are building fruit-fly simulations and “Digital Phylogeny.” They are trying to “bootstrap” AI by letting millions of digital organisms evolve in simulated physical worlds to encode “World Truths” before they ever see a line of text.
The Bottom Line: If you’re waiting for a “God in a Box” by 2027, you’re being sold a bill of goods. The real work is in the trenches, building specialized models that actually map to physical reality (not to say LLMs aren’t powerful). AGI isn’t coming because we ran out of data; it’s coming when we finally figure out how to give a machine a “stake” in reality.
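To make the “survival loss function” idea concrete, here is a deliberately tiny sketch (my own toy construction, not anyone’s actual research code) of evolution in a simulated world: agents carry a single heritable “reflex” parameter, fitness is how long they survive near a cliff edge rather than how well they predict anything, and selection plus mutation gradually encodes the “world truth” (cliffs kill you) into the population before any text is involved.

```python
import random

def survival_time(reflex, world_len=20, max_steps=100, seed=0):
    """Fitness = steps survived in a toy 1D world with cliffs at both ends.

    The agent is buffeted by random drift; its evolved 'reflex' is the
    probability of counteracting drift when dangerously close to an edge.
    """
    rng = random.Random(seed)
    pos = world_len // 2
    for step in range(max_steps):
        drift = rng.choice([-1, 1])              # environment pushes the agent
        if pos <= 2 and drift < 0 and rng.random() < reflex:
            drift = 1                            # evolved prior: step back from the edge
        if pos >= world_len - 3 and drift > 0 and rng.random() < reflex:
            drift = -1
        pos += drift
        if pos < 0 or pos >= world_len:          # fell off the cliff
            return step
    return max_steps

def evolve(pop_size=30, generations=40, seed=42):
    """Select for survival, not prediction: top half breeds, children mutate."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]        # random reflex strengths
    for gen in range(generations):
        scored = sorted(pop, key=lambda r: survival_time(r, seed=gen), reverse=True)
        parents = scored[: pop_size // 2]                # survivors reproduce
        pop = [min(1.0, max(0.0, p + rng.gauss(0, 0.05)))
               for p in parents for _ in (0, 1)]         # two mutated offspring each
    return max(pop)

best = evolve()
```

The point of the sketch: nothing in the loop ever “describes” a cliff, yet the surviving lineages end up behaving as if they understand one. That is the blank-slate argument in miniature; the real proposals involve physics simulators and millions of organisms, not a one-parameter random walk.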

Originally posted by u/Hot_Actuator9930 on r/ArtificialInteligence