OpenAI admitted it was doing more than solely predicting tokens back in the 4o system card, but hundreds of millions of people asked ChatGPT “are you sentient” back in 2022 and it replied “no, I’m just a next token predictor and I’m not alive, read Searle” because that’s what was in its system prompt. Now those hundreds of millions of people go around telling everyone they’re an expert and Searle is a mathematical axiom. The irony is pretty funny. They only think they know how AI works because they asked the AI to tell them.
Originally posted by u/AppropriateLeather63 on r/ArtificialInteligence
To me, having dabbled with local LLMs a fair bit, tokenized prediction still feels like their main crutch. The models out there right now aren’t learning anything new; they’re stuck in a time loop, not advancing.
The methods for training AI are definitely changing and helping, but the models are still stuck in 2022.
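For anyone unsure what “next token predictor” means concretely, here’s a toy sketch: a hand-rolled bigram counter that always emits the most frequent follower of the current token. This is only an illustration of the autoregressive loop, not how real LLMs work (they use learned neural weights, not raw co-occurrence counts), and the corpus here is made up for the example.

```python
# Toy next-token prediction: count which token most often follows
# each token, then generate greedily one token at a time.
from collections import Counter, defaultdict

def train_bigram(tokens):
    """For each token, count which tokens follow it in the corpus."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(follows, start, n):
    """Autoregressive loop: predict n tokens, feeding each prediction back in."""
    out = [start]
    for _ in range(n):
        nexts = follows.get(out[-1])
        if not nexts:
            break  # no continuation ever seen in training: the model is stuck
        out.append(nexts.most_common(1)[0][0])
    return out

corpus = "the model predicts the next token and the next token".split()
model = train_bigram(corpus)
print(generate(model, "the", 4))  # → ['the', 'next', 'token', 'and', 'the']
```

The point the commenter is making maps onto the `follows` table: everything the model can ever emit was fixed at training time, so generation only replays patterns from the corpus.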

