Originally posted by u/joed2355 on r/ArtificialInteligence
I realize this is a bot post, but this is indeed how LLMs work. Their output is slightly randomized, and they have no backspace: each token is appended to the sequence and fixed. If they get a word "wrong" and notice it mid-generation, they can't undo it.
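To make the "no backspace" point concrete, here's a toy sketch of the autoregressive sampling loop (the toy vocab, probabilities, and function names are made up for illustration; real models produce logits over tens of thousands of tokens, but the loop has the same shape):

```python
import random

def next_token_probs(prefix):
    # Toy "model": a fixed next-token distribution. A real LLM would
    # condition these probabilities on the prefix generated so far.
    return {"the": 0.5, "cat": 0.3, "sat": 0.15, "<eos>": 0.05}

def sample(probs, temperature=1.0):
    # Temperature reshapes the distribution before drawing from it.
    # This random draw is where the "slightly randomized" output comes from.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # float-rounding fallback: return the last token

def generate(max_tokens=10):
    out = []
    for _ in range(max_tokens):
        tok = sample(next_token_probs(out))
        out.append(tok)  # append-only: there is no operation that removes a token
        if tok == "<eos>":
            break
    return out
```

Notice the loop only ever calls `append`. Once a "wrong" token is in `out`, every later step just conditions on it.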
This is kinda why "thinking" (chain-of-thought before the final answer) got so popular. It's crude and wasteful, but it gives LLMs a chance to catch mistakes before committing to an answer. And Gemini 2.5+ seems to have picked up the habit of correcting itself mid-answer, too.
There's actually some cool research into "backspace" tokens, advanced sampling strategies, and text diffusion models, which get plenty of chances to revise mistakes. But corporate "AI Bro" LLM development is way more conservative than you'd think; the big labs don't seem to care much about this kind of research.
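The backspace-token idea is easy to sketch. Here's a hypothetical decoding loop where one reserved token (`<bksp>`, a name I'm making up) deletes the last output token instead of appending, with a scripted "model" standing in for a trained one:

```python
def generate_with_backspace(step_fn, max_steps=20):
    """Decoding loop where a reserved <bksp> token undoes the last token."""
    out = []
    for _ in range(max_steps):
        tok = step_fn(out)
        if tok == "<bksp>":
            if out:
                out.pop()  # the one move vanilla autoregressive decoding can't make
        elif tok == "<eos>":
            break
        else:
            out.append(tok)
    return out

# Scripted "model" that makes a typo, notices, and fixes it.
script = iter(["teh", "<bksp>", "the", "cat", "<eos>"])
print(generate_with_backspace(lambda prefix: next(script)))  # → ['the', 'cat']
```

The hard part isn't this loop, of course; it's training a model to actually emit `<bksp>` at the right moments.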
