ngl, I’m so tired of people acting like scaling LLMs is the only path to AGI. We’ve spent years building better chatbots while the fundamental reasoning underneath is still total jello. You can’t run a power grid or a global supply chain on “autocomplete on steroids” that hallucinates 15% of the time. Seeing the Milken Conference panel with ASML, Google, and Logical Intelligence felt like a massive reality check. The adults are finally in the room, and they’re pivoting to deterministic, energy-based models because “mostly right” doesn’t cut it in the real world. The hype cycle is hitting a hard ceiling. We need actual logic and correctness, not just better vibes. It’s about time we stopped pretending next-token prediction is a substitute for real intelligence.

Submission Statement: I’m posting this because the 2026 Milken Conference panels suggest a major architectural pivot in the industry. While the public is still obsessed with generative LLMs, the “big money” and infrastructure players (like ASML) are moving toward deterministic, energy-based reasoning models to solve the hallucination problem in critical systems. This represents a fundamental shift in the path toward AGI.
Originally posted by u/caroulos123 on r/ArtificialInteligence
