A friend told me something last week that I haven’t been able to stop thinking about. His coworker dropped a 3,000-word Gemini output into a shared doc before a meeting. Subject line: “Here’s what AI thinks — let’s start from here.” My friend spent 40 minutes reading it. Found three factual errors, two logical contradictions, and an entire paragraph that was technically true but completely useless. The kind of paragraph that sounds like it means something until you realize it says nothing. The coworker had spent 30 seconds.
There’s a term for this: slop. Not just “low-quality content” but something more specific: content where the energy required to consume it far exceeds the energy required to produce it. The producer offloads the cognitive labor. The reader absorbs it. What’s insidious isn’t the quality. It’s the redistribution of labor. When you paste unfiltered AI output into a meeting doc, you’re not being “transparent” or “efficient.” You’re outsourcing your thinking to whoever has to read it. You’ve decided your time is worth more than theirs.
I’ve been thinking about this alongside something David Abram wrote recently. He’s been in software for years, and his thesis is that the machine didn’t take your craft; you gave it away. The hard part of this job was never the typing. It was understanding the system. Debugging the unintuitive failure. Designing architecture that doesn’t collapse under load. Making calls where you own the outcome. LLMs can suggest code. They can fill boilerplate. They can be a sounding board. What they can’t do: understand why a decision is right. Hold the context. Feel the weight of getting it wrong. That part? Still yours. Whether you practice it or not.
Here’s the mechanism I think is quietly wrecking people: every time you skip the step of asking “is this worth sending?”, you let that muscle atrophy. Curation requires judgment. Judgment requires practice. Skip it enough times and you genuinely lose the capacity. Meanwhile, if you’re on the receiving end of slop all day, your attention is getting shredded by high-volume, low-density input. You have less left over for the hard thinking. You’re atrophying too. It’s a bilateral decay loop.
The usual defense is “it’s just a tool, it’s how you use it.” Sure. But tools shape behavior. When a behavior becomes nearly free, it proliferates. That’s not technological determinism; it’s economics. The more interesting question is what we’re quietly redefining as “work.” When KPIs become “reports generated per day,” when speed rather than judgment gets rewarded, when “I did 3x the output” means you generated 3x the volume, the capabilities that actually create value (understanding systems, making hard calls, owning consequences) get systematically devalued. You don’t get replaced by the AI. You just gradually become someone who doesn’t need to be replaced, because your job has become copy-paste and prompt-tuning.
The part that stings most, I think: You can let AI draft. You can’t let AI decide whether something is worth doing. You can let AI list options. You can’t let AI carry the weight of choosing wrong. You can use AI to go faster. But if your direction is off, you’re just reaching the wrong destination more efficiently. The weight of choosing, the responsibility of judgment, the ownership of consequences — none of that disappears. It just waits for you. After you close the tab. And if you’ve been avoiding it long enough, one day you realize you no longer know how to decide. The machine didn’t hollow you out. You did it first.
What do you think? Is there a distinction worth drawing between AI as a drafting tool vs. AI as a thinking replacement? Or is that a line that’s already collapsed in most workplaces?
Originally posted by u/Quick-Knowledge1615 on r/ArtificialInteligence
