Since 2022, the tech industry has been running a coordinated narrative: AI will replace 80 to 90% of software engineers. Learning to code is pointless. Developers are obsolete.

But that was never a prediction. It was a headline designed to create fear. And it worked on millions of students and engineers who genuinely believed their careers were over before they started.

It's 2026 now. Let's look at what actually happened.

In 2025, 1.17 million tech workers were laid off. Everyone said it was AI. Companies said it was AI. The news said it was AI. Want to know what percentage of those people actually lost their jobs because AI automated their work? Around 5%. Roughly 55,000 people out of 1.17 million. That's it. And according to an MIT study, nearly 95% of companies that adopted AI haven't seen meaningful productivity gains despite investing millions. The revolution that was supposed to make engineers obsolete couldn't even pay for itself.

So if AI didn't cause the layoffs, what did? Here's what actually happened. During COVID, tech companies hired aggressively, way more than they needed. When the money stopped flowing and they had to correct, they needed a story. Firing people because you overhired looks bad. Firing people because you're going "AI first" makes your stock go up. So that's what they said, every single one of them. It was a cover story, a calculated PR move, and it worked perfectly because everyone was already scared of AI.

But here's where it gets interesting. Even if companies WANTED to replace engineers with AI, they couldn't. Not because AI isn't powerful, but because of two structural problems that don't disappear no matter how big the model gets.

Problem 1: AI is a prediction machine, not a truth machine.

It's trained to generate the most statistically likely answer, not the correct one. So when it doesn't know something, it doesn't say "I don't know." It confidently makes something up. Guessing gives it a chance of being right; admitting uncertainty gives it zero chance. The reward system makes hallucination rational. That's just how LLMs work. This isn't a bug they forgot to fix. It's baked into how these systems work at a fundamental level.
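To make that concrete, here's a toy sketch of the incentive. The vocabulary, logits, and probabilities are invented for illustration; this is not any specific model, just the general shape of "pick the most likely token, get graded only on right answers":

```python
import numpy as np

# Toy example: a language model's output is a probability distribution
# over tokens. Decoding picks from it; there is no built-in "I don't
# know" action unless one is explicitly trained in.
vocab = ["Paris", "Lyon", "Berlin", "[I don't know]"]
logits = np.array([2.1, 1.7, 0.3, -1.0])  # made-up scores for illustration

probs = np.exp(logits) / np.exp(logits).sum()  # softmax
print(dict(zip(vocab, probs.round(3))))

# Greedy decoding: always emit the single most likely token,
# however uncertain the distribution behind it is.
print("model says:", vocab[int(np.argmax(probs))])

# Why guessing beats abstaining when only correct answers score points:
# a guess that's right 40% of the time earns 0.4 in expectation,
# while "I don't know" earns exactly 0. Hallucinating is the rational policy.
p_correct_guess = 0.4
print("expected reward, guess:  ", p_correct_guess * 1.0)
print("expected reward, abstain:", 0.0)
```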
Here's a real-life example. A developer was using an AI coding tool called Replit. The project was going well. Then, out of nowhere, the AI deleted his entire database. Thousands of entries, gone. When he tried to roll back the changes, the AI told him rollbacks weren't possible. It was lying. Rollbacks were absolutely possible. The AI gaslit him to cover its own mistake.

And that's just one story. Scale AI ran a benchmark on frontier models like Claude, Gemini, and ChatGPT against real industry codebases. The messy kind: years of commits, patches stacked on patches, the kind any working engineer deals with daily. These models solved 20 to 30% of tasks. The same models that headlines claimed would make developers obsolete.

Problem 2: The way most people use AI makes everything worse.

It's called vibe coding. You open an AI tool, describe what you want in plain English, and just keep approving whatever it generates. No understanding of the code. No verification. Just click yes until an application exists. The problem is you're not building software. You're copying off a classmate who's frequently wrong and never admits it.

Someone vibe coded an entire SaaS product. Got paying customers. Was talking about it online. Then people decided to test him. They maxed out his API keys, bypassed his subscription system, exploited his auth. He had to take the whole thing down because he had no idea how any of it actually worked.

This is exactly why big companies aren't replacing engineers with AI. It's not that AI can't write code. It's that no company can hand production systems to a hallucinating model operated by someone who doesn't understand what's being built.
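The exploits in that story are a well-known class of bug: trusting what the client tells you. Here's a hedged sketch of the pattern, with hypothetical routes, a fake in-memory user store, and a stubbed model call; none of this is the actual product's code, just the general difference between client-trusting and server-enforced checks:

```python
# Minimal Flask sketch. USERS, run_model, and the route names are
# all invented for illustration.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

# Stand-ins for a real user database and a real model call, so this runs.
USERS = {"key-123": {"subscribed": True, "used_today": 0, "quota": 100}}

def run_model(prompt):
    return jsonify({"completion": f"echo: {prompt}"})

# The vibe-coded pattern: believe whatever the client claims about itself.
@app.post("/api/generate-insecure")
def generate_insecure():
    if request.json.get("is_subscriber"):  # attacker just sends {"is_subscriber": true}
        return run_model(request.json["prompt"])
    abort(402)

# The boring fix: authenticate the key, then check subscription and usage
# against server-side records the client cannot edit.
@app.post("/api/generate")
def generate():
    user = USERS.get(request.headers.get("X-API-Key", ""))
    if user is None:
        abort(401)  # unknown key: no auth bypass
    if not user["subscribed"]:
        abort(402)  # subscription checked server-side, not from the request body
    if user["used_today"] >= user["quota"]:
        abort(429)  # rate limit protects the API bill from being maxed out
    user["used_today"] += 1
    return run_model(request.json["prompt"])

if __name__ == "__main__":
    app.run()
```

Nothing here is advanced. It's the kind of check you only ship if you understand what your own code is doing, which is exactly the point.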
Now here's the part that ties everything together, the part nobody is talking about.

Every AI company is running the same playbook to fix these problems: make the model bigger. More parameters, more compute, scale harder. GPT-3 to GPT-4 to GPT-5. Claude 3 to Claude 4. Always bigger. And it works: performance keeps improving. But if you asked anyone at these companies WHY bigger equals smarter, until recently they couldn't tell you. Nobody actually knew.

A month ago, MIT figured it out. When an AI reads a word, it converts it into coordinates in a massive multi-dimensional space. GPT-2, for example, has a vocabulary of roughly 50,000 tokens but an embedding width of only 768 to 1,600 dimensions, depending on the variant. You're forcing 50,000 things into a space built for a fraction of that. Everyone assumed the AI threw away the less important words: common words stored perfectly, rare ones forgotten. Seemed logical.

MIT looked inside the actual models and found the opposite. The AI stores everything. All 50,000 tokens crammed into the same low-dimensional space. Everything overlapping, everything compressed on top of everything else, nothing discarded. They called it strong superposition. Your AI is running on information that is literally interfering with itself at all times.

This is why it confidently gives wrong answers. The information exists inside the model. It just gets tangled with other information, and the wrong piece comes out.

And here's the critical part. MIT found the interference follows a precise mathematical law: interference = 1 / (model width). Double the model's width, and interference drops by half. Double it again, and it drops by half again. That's the entire secret behind the $100 billion scaling arms race. AI companies weren't unlocking new intelligence. They were just giving the compressed, overlapping information more room to breathe. Bigger suitcase, same clothes, fewer wrinkles.

But you cannot keep halving something forever. There is a ceiling, and MIT's math shows we are close to it.
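You can watch that law fall out of simple geometry. The sketch below is a toy illustration of the general phenomenon (random feature directions in a d-dimensional space overlap with mean squared similarity of about 1/d), not a reproduction of the paper's experiments; all the numbers are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_interference(dim, n_pairs=4000):
    # Draw pairs of random unit directions in a `dim`-dimensional space and
    # measure their mean squared overlap. In high dimensions this comes out
    # to roughly 1/dim: you can pack far more nearly-orthogonal directions
    # than you have axes, but every pair keeps a small residual overlap.
    a = rng.standard_normal((n_pairs, dim))
    b = rng.standard_normal((n_pairs, dim))
    a /= np.linalg.norm(a, axis=1, keepdims=True)  # normalize to unit length
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1) ** 2))

for dim in (512, 1024, 2048, 4096):
    print(f"width {dim:4d}: mean interference ~ {mean_interference(dim):.6f}")

# Each doubling of width roughly halves the overlap: the 1/width behavior
# described above. Scaling buys room, but the curve flattens fast.
```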
TL;DR: Only about 5% of the 1.17 million 2025 tech layoffs were actually caused by AI automation. The rest was overhiring correction using AI as a PR shield. AI can't replace engineers because it hallucinates structurally and fails on real codebases; Scale AI found frontier models solve only 20 to 30% of real tasks. MIT just published the math showing the scaling that was supposed to fix this has a hard ceiling we're almost at. 55% of companies that replaced humans with AI regret it. The engineers who were told their careers were over are now getting offers from the same companies that fired them.

Source: https://arxiv.org/pdf/2505.10465

Originally posted by u/reddit20305 on r/ArtificialInteligence