Original Reddit post

For starters, I was an early adopter of AI. Like, I was running DeepBach on my Mac to write Bach chorales in 2018, and tried to train a model of my own in 2019, before I really knew how any of it worked. When ChatGPT came out I thought it was awesome and used it all the time, and I quickly became intimately familiar with what it could and could not do. For instance, it was great at writing a first draft in a tone I struggled with, but it couldn't be used for anything that required a lot of reasoning or intuition about sound. Everything kept getting better, and a lot of things kept getting fixed, but I noticed that my core problems, like rhyme, never really got fixed. Better, certainly, but never fixed.

Then I read the Apple paper on AI reasoning and realized that the lack of reasoning is a fairly fundamental flaw in large language models, and I haven't been able to unsee it since. All of these models are just very sophisticated text prediction machines. Of course they can't reason about Towers of Hanoi beyond the scope of their training data (even though it's a children's game…).

That's all fine and dandy, and I definitely don't think it undermines the usefulness of the models for some things, but what baffles me is the hype. People keep talking about super-intelligent AI, or a coming permanent underclass, or whatever, but nobody has figured out how to get these models to reason soundly about simple algorithms we learned in elementary school. It's been a while now; we've spent more on this than we did on the railroad and dot-com bubbles combined, and nobody seems to have fixed the reasoning problem. Are these people ignorant of their own machines? Are they being deliberately misleading for profit? Have they succumbed to AI psychosis of some kind? Or am I completely wrong and have missed some major AI milestones? Let me know!

Originally posted by u/not-the-real-dweezle on r/ArtificialInteligence
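For readers who haven't seen it since elementary school: the Towers of Hanoi algorithm the post refers to really is a few lines of recursion. Below is a minimal Python sketch of the classic recursive solution; the function name and peg labels are just illustrative, not anything from the post.

```python
def hanoi(n: int, source: str, target: str, spare: str) -> None:
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)     # move n-1 disks out of the way
    print(f"move disk {n}: {source} -> {target}")  # move the largest disk
    hanoi(n - 1, spare, target, source)     # stack the n-1 disks back on top

hanoi(3, "A", "C", "B")  # solves 3 disks in 2**3 - 1 = 7 moves
```

The solution is optimal at 2^n − 1 moves, which is why the puzzle shows up in benchmarks like the Apple paper: the algorithm is trivial to state, but the number of steps to execute grows exponentially with n.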