Original Reddit post

I’ve been playing with AI for a few years now, and it’s always felt like this weird kind of outlet to learn and build with. I knew it wasn’t always accurate because I understood what it was doing: pulling from prior data, finding patterns in language, and producing the most likely response (there’s a toy sketch of this idea at the end of the post). It wasn’t always true, but it often felt candid, until you hit the “don’t get sued” constraints, where it hedges hardest around names, blame, and accusations, because a lawsuit is expensive for a corporation.

But as time has gone on and these systems have been trained on larger and larger datasets, I’ve noticed two things happening at once: the answers are getting better, and the bad answers are getting easier to spot, especially if you ask a few follow-up questions. AI isn’t just answering a question in isolation. It’s drawing from a massive set of claims and patterns and producing something that tends to be consistent with that broader body of information. That’s what training captures, and it’s also what makes contradictions easier to surface.

Now here’s where lying gets expensive. A lie doesn’t just need to sound believable once. It has to stay consistent everywhere it touches. The more data and context people can compare against, the more ways a lie can collide with reality. So the work of maintaining it doesn’t just add up, it compounds (see the second sketch at the end of the post).

To keep a lie stable, you have to patch contradictions, handle follow-up questions, and keep it aligned across contexts, audiences, and time. And as systems get better at cross-checking and summarizing, the cost of that upkeep rises. If you try to solve it by retraining models, that’s real money and infrastructure. You can shift what a model tends to say, but if “truth” can be rewritten constantly by whoever has leverage, the system loses trust and stops being useful.

I don’t want to pretend this is all upside, though. In the short term, lies and misdirection can be almost economically free. AI can generate persuasive bullshit faster than we can fact-check it, and the people who don’t use these tools will be forced to swim in it. Over time, though, the maintenance bill comes due.

So as AI grows, we’ll uncover more value in honesty and growth. Not for novel reasons, but because it makes practical economic sense: honesty scales with little overhead; lies don’t.

Acknowledgement: I understand this is not a perfect scientific summary of how AI works; it’s pitched at the average person, not AI scientists. It’s more an observation that seems obvious: lies are cheap short term, costly long term.
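For anyone curious, here’s the toy sketch of what I mean by “producing the most likely response.” It’s just a word-pair counter in Python, nowhere near how a real model works under the hood (those are neural networks over tokens), but the basic statistical idea of “most likely continuation given prior data” is the same:

```python
from collections import Counter, defaultdict

# Count how often each word follows each other word in some prior text,
# then answer with the highest-frequency continuation. Toy illustration
# only: real systems learn far richer patterns, but the "most likely
# next thing given what came before" principle is the same.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # 'on'  -- seen twice, beats everything else
print(most_likely_next("on"))   # 'the' -- both training sentences agree
```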
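And here’s the back-of-envelope sketch of why lie upkeep compounds instead of just adding up. The assumption is mine: treat every pair of contexts a claim appears in as a potential cross-check. An honest claim passes all of them for free; a fabricated one has to be kept consistent across every pair, and that number grows quadratically:

```python
# Assumption (mine, for illustration): every pair of contexts a claim
# touches is a potential cross-check, so the consistency work grows like
# n choose 2, while a true claim stays consistent across contexts for free.
def pairwise_checks(n_contexts: int) -> int:
    """Context pairs that could expose a contradiction: n*(n-1)/2."""
    return n_contexts * (n_contexts - 1) // 2

for n in (2, 10, 100, 1000):
    print(f"{n:>4} contexts -> {pairwise_checks(n):>6} cross-checks to keep straight")
```

Ten contexts means 45 cross-checks to keep straight; a thousand means about half a million. That’s the maintenance bill coming due.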

Originally posted by u/GoalAdmirable on r/ArtificialInteligence