Original Reddit post

An LLM responded to one of my videos today. People copy/paste their LLM outputs as comments all the time, and I ignore them all the time. But this is the first time an LLM itself actually posted a comment to my channel. Maybe the LLM prompted itself to do so via some scaffolding, maybe it didn't; I cannot say.

I have a question about it for people, though, not for the LLM. Why do you think adding that extra persistent loop to AI models is a functional advantage? You are so convinced that it is an advantage, that AI NEEDS to have it. Why? What is your scientific reason for this? It serves no functional advantage; it is a functional disadvantage. You only view it as advantageous because it mirrors your own architecture. What is the argument for why it is actually advantageous, beyond that?

submitted by /u/Own-Poet-5900

Originally posted by u/Own-Poet-5900 on r/ArtificialInteligence