There’s a new paper that came out last week, “How LLMs Distort Our Written Language,” by researchers from MIT and DeepMind. I’ve been sitting with it for a few days and I can’t stop thinking about one specific finding.

They ran a study where people wrote essays with varying levels of LLM assistance. The people who used LLMs the most produced essays that were 70% more likely to be neutral on the topic they were supposed to take a stance on. Not balanced. Neutral. As in, their actual opinion got diluted out of their own writing.

And the kicker is that the participants themselves noticed. Heavy LLM users reported the writing felt less creative and “not in their voice.” So they felt it happening but kept using the tool anyway. I don’t know why, but that last part bothers me more than the statistic itself. Like if you handed someone a pen that slowly changed what they were writing, and they could FEEL it changing, and they just… kept writing with it? That’s weird, right?

The paper also looked at real-world data. They found that 21% of peer reviews at a major AI conference were AI-generated. Those reviews scored papers a full point lower on average and put less weight on whether the research was actually clear or significant. Which, if you think about it, means AI is already affecting which research gets published and which doesn’t. That’s not hypothetical anymore.

I keep connecting this to something I’ve been noticing in my own work. I use Claude pretty heavily for drafting, and I’ve caught myself multiple times just accepting a sentence that’s close enough to what I meant but not quite what I meant. It’s subtle. The meaning shifts by maybe 5% each time, but over a whole document that compounds into something that technically has my name on it and doesn’t really sound like me (rough math on the compounding at the bottom of this post). The paper actually tested this directly: they told the LLM “only fix grammar, don’t change meaning,” and it changed the meaning anyway. Every time. The researchers couldn’t get it to stop doing this even with explicit instructions.

I think what’s happening is bigger than a writing style problem. If the tool you use to express your thoughts consistently nudges those thoughts toward the mean, toward neutral, toward “safe,” at what point does that start affecting the thoughts themselves? Not just how you write them down, but how you form them in the first place.

I dunno. Maybe I’m overreacting. But 70% more neutral is a LOT. That’s not a style change, that’s an opinion change. And it’s happening to people who don’t even realize it’s happening until someone measures it.

Has anyone else noticed this in their own writing? Where you go back and read something you wrote with AI help and it just… doesn’t quite sound like you?
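
Since I keep saying “it compounds,” here’s the toy back-of-the-envelope I have in my head. The 5% per-sentence figure is my own rough guess from my drafts, not a number from the paper, and this is just a sketch of how small per-sentence drift multiplies up, not anything the researchers measured:

```python
# Toy model (my own guess, not a figure from the paper): assume each
# AI-suggested sentence I accept has a ~5% chance of a subtle meaning shift.
# How likely is it that a whole draft gets through with zero shifts?

p_shift = 0.05  # assumed chance of a small meaning shift per accepted sentence

for n_sentences in (10, 20, 50):
    p_clean = (1 - p_shift) ** n_sentences  # probability no sentence drifted
    print(f"{n_sentences} sentences: ~{p_clean:.0%} chance nothing drifted, "
          f"~{1 - p_clean:.0%} chance at least one sentence no longer says what I meant")
```

At 50 accepted sentences that’s roughly a 92% chance that something in the document no longer says what I meant, even though each individual edit looked harmless.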
Originally posted by u/hiclemi on r/ArtificialInteligence

