One of the strangest parts of current AI progress is how models can solve complex coding tasks, generate realistic media, or explain advanced topics, then completely fail at something that seems simple or obvious. Sometimes it's a basic logic slip, missing context, a confidently wrong answer, or a mistake a human wouldn't normally make. It feels like capability is growing fast while reliability is growing much slower. Why do these systems improve so dramatically in some areas yet still struggle in others that seem easier on the surface? Is this mainly a training issue, an architecture issue, or just how intelligence works at scale?
Originally posted by u/NoFilterGPT on r/ArtificialInteligence

It’s because the “reasoning” is probabilistic prediction over patterns learned from training data, not logic or math or actual reasoning. It doesn’t think, it guesses. It’s not intelligent; it’s an effective translation and text-retrieval device. So yes, it’s a mix of training problems, architectural problems, and “just how it works” problems. There are potentially more effective architectures for machine learning out there, but today’s LLMs don’t use them.
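To make the "it guesses" point concrete, here is a toy Python sketch of next-token sampling. The vocabulary, logits, and implied prompt are invented for illustration, but the softmax-then-sample step is the basic mechanism a decoder LLM uses to pick each token, which is why a fluent answer can still be confidently wrong.

```python
import math
import random

# Toy illustration: the model outputs a score (logit) for every token in its
# vocabulary, converts the scores to probabilities with a softmax, and then
# samples the next token. The vocabulary and logits below are made up --
# a real LLM produces them from billions of learned weights.
vocab = ["Paris", "London", "banana", "42"]
logits = [4.2, 2.1, -1.0, 0.3]  # hypothetical scores for "The capital of France is ..."

# Softmax: turn raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sampling: even a low-probability token gets picked occasionally, and the
# model has no separate logic check that would catch the bad pick.
next_token = random.choices(vocab, weights=probs, k=1)[0]

print({tok: round(p, 3) for tok, p in zip(vocab, probs)})
print("sampled next token:", next_token)
```

Run it a few times and you'll occasionally see "London" or even "banana" come out, which is the probabilistic-guessing behavior the comment above is describing, just at a trivially small scale.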