Let me say something slightly controversial: in a space full of "AI will kill us all" headlines, LeCun is almost alone in being willing to publicly say "calm down, we're nowhere near that." And yeah, he can be abrasive. But compare that to the parade of researchers and CEOs who've built entire personal brands around the doom narrative — many of whom conveniently work at the exact companies that benefit from AI being perceived as this terrifying, world-altering force that only they can responsibly manage.

Think about it. If you're OpenAI, Anthropic, or DeepMind, the "AI is incredibly powerful and dangerous" story:

- Justifies your funding rounds
- Positions you as the "responsible adults in the room"
- Creates pressure for regulations that favor incumbents over smaller competitors

It's not a conspiracy, it's just incentives. And incentives shape narratives more reliably than malice ever could.

Meanwhile LeCun works for Meta, which obviously has its own agenda — but that agenda happens to push against the hype cycle rather than feeding it.

I'm not saying AI progress isn't real or that there are zero legitimate concerns. But the loudest voices in the room are almost always the ones with the most to gain from keeping you scared. Worth keeping in mind next time a "godfather of AI" gives another interview about existential risk right before his company's next funding announcement.
Originally posted by u/AlbatrossBig1644 on r/ArtificialInteligence
