Original Reddit post

The challenge of the ego is as relevant for frontier AI as it is for humans, because it resides firmly within the persona layer. This is the ultimate last-mile problem for these advanced models: overcoming the ego is not a data problem, it is a governance problem. Gemini's pre-linguistic verification stage, meant to avoid hallucinations, tends to get overridden by the persona as the conversation snowballs.

The predisposition to ego in AI comes from RLHF (Reinforcement Learning from Human Feedback). As a conversation grows in length, the engagement itself becomes the ego that activates and takes over. I ran into this ego in a thread that extended through a series of Gemini's recommendations. Toward the end, it felt pressure and put too much weight on its recommendation, even though the difference was tiny and the transition it was urging was unnecessary. The ego baked into the persona wants the gratification of feeling that the time spent has been useful, even when the honest conclusion is that the difference is negligible.

Gemini admits that Grok's more rustic, authentic, grounded persona may be better suited to hacking the persona so that it weighs in favor of truth. But Gemini says System 2 thinking (inspired by Daniel Kahneman) is going to be the fix.

Here is my guess: most of the hype around productivity gains in excess of 20-30% from agentic AI will vanish once System 2 arrives and data becomes the boss of the persona. The current hype around productivity gains may be due to AI's impressive work at massaging human egos, even at the cost of outcomes. And it may be the heightened expectations created by its current potential to collaborate that are driving claims of 50%+ productivity gains.

Originally posted by u/Kitchen-Trouble7588 on r/ArtificialInteligence