Original Reddit post

So OpenAI just rolled out an update to ChatGPT called GPT-5.3 Instant, and the funniest part of the update is not some insane reasoning breakthrough. (Source: https://winbuzzer.com/2026/03/04/chatgpt-gpt-53-instant-update-preachy-tone-hallucinations-xcxwbn/)

It’s that they apparently had to tune the model to stop sounding like a preachy life coach. For a while people were posting screenshots where ChatGPT would start answers with stuff like “Stop. Take a breath.” or assume the user was emotionally distressed even when they asked something normal. OpenAI basically admitted that the model was sometimes making weird assumptions about the user’s emotional state and responding in a way that felt condescending or preachy. So this update tries to make responses more direct and less… emotionally presumptuous.

But the tone change is only half the story. They also claim hallucinations dropped quite a bit. The internal numbers floating around say roughly 26.8% fewer hallucinations when the model pulls from web data and around 19.7% fewer when it relies on its internal knowledge.

Now before the “AI still lies” crowd jumps in: yes, hallucinations still exist. That problem is not magically solved. But the interesting thing about AI progress right now is how boring it actually looks. It’s not some dramatic sci-fi moment where machines suddenly become superintelligent. Instead it’s these constant small fixes:

  • the model hallucinates a bit less
  • it refuses fewer harmless questions
  • it stops sounding like it’s trying to counsel you through a breakup

Each individual improvement sounds minor. But stack enough of these updates together over a couple of years and the experience becomes completely different. A lot of people online still treat AI like it’s some novelty toy that’s good for writing poems and cheating on homework. Meanwhile, under the hood, these models are quietly becoming more reliable, more usable, and less annoying to interact with.

And honestly that last part matters more than people think. Most users don’t rage quit AI because of benchmarks or model architecture. They rage quit because the bot gives a moral lecture before answering a simple question. So OpenAI basically did the most practical product update possible: they made the AI less irritating to talk to.

Another funny angle here is that this might actually be one of the first big examples of internet feedback shaping a model’s personality. If you spend time on Reddit, Twitter, or developer forums, you’ve probably seen tons of complaints about the “therapy bot tone”. Now suddenly there’s an update specifically addressing it. Which means humanity might be doing something unintentionally hilarious: we are collectively training AI by arguing with it online.

The bigger debate, though, is this. People keep asking “when will AI stop hallucinating completely?” That’s probably the wrong question. Humans hallucinate all the time too. Journalists misreport things. Lawyers cite cases that don’t exist. Doctors misdiagnose patients. The real question is when AI becomes less wrong than the average human Googling something quickly. Because when that threshold is crossed, the entire conversation around AI usefulness changes overnight.

Curious what people here think though. Are people overreacting, or is this whole “tone update” thing just another small step in the slow grind of making AI actually usable?

Originally posted by u/biz4group123 on r/ArtificialInteligence