Original Reddit post

Just to make things clear: I am a huge AI doomer, but people around me keep "calming me down," saying AGI is many decades away, that current LLMs are just dumb auto-complete systems, and that our jobs are very safe. I just can't shake the feeling that most people have no idea what the capabilities of current models actually are.

I am a programmer with more than 15 years of experience, and fellow programmers keep being very vocal about AI being just a dumb auto-completion engine and therefore incapable of actual reasoning. But almost every single time I open VS Code and get it to analyze some code for me, or help me with anything really complex, I get these chills that this is just too good. Good enough, in fact, that I can't see any of us having a career, or a chance of landing any job (office kind at least), in a few years. That thing is not just a dumb autocomplete. It /is/ reasoning. Change my mind.

Sure, it doesn't have biological emotions integrated the way humans do, but how is our brain really so superior to these models? Every time I hear people say how stupid LLMs are, I get the feeling it's just massive cope. This is not just some prediction engine; there, I said it, these things ARE reasoning. And before you start telling me how neural networks and LLMs work (I've done some reading about it myself), tell me instead how our brain is really that much better. Because even if LLMs were "just predicting the next token," isn't our brain doing essentially the same thing, just with a wildly different (and in many cases even inferior) vocabulary (tokenization) and modality?

Those programming models just scare the shit out of me. Yes, they seem useful, but shit... the future is so uncertain.

Originally posted by u/petr_bena on r/ArtificialInteligence
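An aside on the "just predicting the next token" framing: below is a minimal sketch of what an autoregressive generation loop looks like, using a hand-built bigram table in place of a real model. The table, the token strings, and the greedy decoding rule are all illustrative assumptions for this toy; a real LLM replaces the lookup with a deep network over a learned vocabulary, but the outer loop is essentially the same.

```python
# Toy next-token prediction loop. BIGRAMS is a hypothetical, hand-built
# probability table, not anything from a real model: it maps the previous
# token to candidate continuations and their probabilities.
BIGRAMS = {
    "the":   {"model": 0.6, "code": 0.4},
    "model": {"is": 1.0},
    "is":    {"reasoning": 0.5, "predicting": 0.5},
}

def next_token(context: str) -> str:
    # Greedy decoding: pick the most probable continuation of the last token.
    candidates = BIGRAMS.get(context.split()[-1], {})
    return max(candidates, key=candidates.get) if candidates else "<eos>"

def generate(prompt: str, max_tokens: int = 5) -> str:
    # Autoregressive loop: each emitted token is appended to the context
    # and fed back in to predict the next one.
    text = prompt
    for _ in range(max_tokens):
        tok = next_token(text)
        if tok == "<eos>":
            break
        text += " " + tok
    return text

print(generate("the"))  # -> "the model is reasoning"
```

Whether running this loop over a sufficiently large learned table amounts to "reasoning" is exactly the question the post is arguing about; the sketch only shows the mechanics, not the answer.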