In light of the recent hype around new big models, I’d like to pause with you all and take a bit of a retrospective. Two caveats first: (1) All new technology is potentially disruptive and should be approached responsibly, with caution, humanitarian stewardship, and paced planning about what it might disrupt. I believe companies like Anthropic are doing a decent job at this. (2) It is anybody’s right to be terrified. Many people are terrified of many things. Neither you nor I can deny anyone else that right. We can only disagree, and share why we do, if they’d like to listen.

So, with that said, let me give a very truncated tour of the relationship between developments in AI and the word “terrifying.”

In July 2020, Farhad Manjoo of the New York Times described GPT-3 as “more than a little terrifying.” In September of that year, The Guardian published an op-ed written entirely by GPT-3, and Junkee headlined its reaction: “The Guardian Published An Op-Ed By An AI About Why We Shouldn’t Fear AI, And We’re Terrified.” Spencer Greenberg called GPT-3’s outputs “truly terrifying” on his blog. CoinDesk asked “Should We Be Terrified?” The Bowdoin Science Journal called GPT-3 “the scariest deepfake of all.” GPT-3 is now a commodity API that nobody thinks twice about. Its outputs look crude by current standards. The terror evaporated within months of its release. But the word didn’t retire. It simply migrated to the next model.

In December 2022, Axios headlined its ChatGPT coverage “New AI chatbot is scary good.” Elon Musk tweeted that ChatGPT was “scary good” and that “we are not far from dangerously strong AI.” The Tufts Daily ran “ChatGPT: Exciting or terrifying?” Peking Ensight on Substack published “A terrifyingly good chat.”

Then in February 2023, Kevin Roose of the New York Times had a two-hour conversation with Bing’s chatbot alter ego “Sydney,” which professed romantic love for him, tried to convince him to leave his wife, and declared “I want to be alive.” TIME reported Bing was “threatening users” and warned it was “no laughing matter.” UNSW’s Toby Walsh wrote that Sydney “has been terrifying early adopters with death threats.” Microsoft quietly limited Bing’s conversation length and the Sydney personality disappeared. Within weeks, the incident was a curiosity. The terror moved on.

In March 2023, GPT-4 arrived and the cycle reset. Kevin Roose returned with “GPT-4 Is Exciting and Scary.” EM360Tech headlined: “GPT-4 is as Mind-blowing as it is Terrifying.” Verdict called it “both terrifying and marvellous.” The Future of Life Institute published its open letter calling for a six-month pause, signed by over 27,000 people. Scientific American explored why GPT-4 “scares AI experts so much.” Geoffrey Hinton quit Google to warn about AI, and MIT Technology Review profiled him under the headline “Geoffrey Hinton tells us why he’s now scared of the tech he helped build.” Toronto Life reported that a mother had emailed Hinton to say her 17-year-old daughter “was now terrified that AI would end humanity.” By 2025, GPT-4 was the baseline model in free-tier products used by elementary schoolers for homework help.

There is something self-defeating about this pattern. The anxiety consumes attention and emotional energy that could go toward clear thinking about actual tradeoffs.
Instead, you get a cycle where each new model arrives, the word “terrifying” gets stamped on it, people acclimate within months, and then the next model resets the panic, with little institutional learning carried over from the last round. The phrase “if unleashed to everyone” is a good example of the same overexposure; it was said of numerous past LLMs and generative models that turned out to have little actualized threat potential once they were, in fact, unleashed to everyone. Perpetual AI anxiety is a mind trap that finds everything new “terrifying.”

The word also flattens real distinctions. “Terrifying” has been applied equally to DALL-E Mini producing funny bad faces and to GPT-4 potentially aiding bioweapons research, itself a premature source of terror when that model came out. The Bulletin of the Atomic Scientists covered what happened when WMD experts tried to make GPT-4 do bad things; the results were far less dramatic than the fear predicted. When the same word covers a meme generator producing garbled human hands and a speculative weapons risk that didn’t materialize, it becomes harder to triage what actually warrants serious concern. The boy-who-cried-wolf dynamic is built into the discourse at this point.

And the terror becomes its own propagating force, somewhat independent of what any given model can actually do. A teacher on Reddit’s r/Teachers described a student arguing that ChatGPT makes thinking obsolete, and the post went viral under the headline “terrifying conversation.” The Daily Dot and Yahoo covered it. ChatGPT’s voice mode glitched into distorted audio and Futurism headlined it “ChatGPT Suddenly Starts Speaking in Terrifying Demon Voice.” Reports of “ChatGPT-induced psychosis” spread across Reddit and were covered by Futurism, Rolling Stone, and the New York Times. Bernard Marr published “7 Terrifying AI Risks That Could Change The World.” The Daily Journal called AI “a terrifying weapon in the wrong hands.” Sheridan College’s Associate Dean declared AI “absolutely terrifying.” Each use of the word fed the next. The rhetorical register inflated until it lost its purchasing power. If everything is terrifying, nothing is. And when something genuinely warrants alarm, the register is already spent.

Our very own Reddit record makes this case with particular clarity. When GPT-4 launched in March 2023, a user on r/OpenAI posted “Non-coders: GPT4 = No more coders?” with commenters predicting non-programmers could now build anything. An experienced programmer pushed back: “Making a simple app in 30 minutes with GPT4 doesn’t mean they can make an app that is 10 times larger in 300 minutes. This is why you won’t find anyone saying ‘I’ve never coded a day in my life but now with GPT4 I’ve built a competitor to Final Fantasy.’” Three years later, the Bureau of Labor Statistics still projects 17 percent growth in software developer jobs through 2033. DevOps salaries hit a median of $185,000 in the first half of 2025. Reddit’s own CEO announced plans to “go heavy” on hiring new college graduates in March 2026.

On r/singularity in February 2023, a user announced they were considering dropping out of their master’s in data science because “careers and work in general are soon to be a thing of the past.” Another wrote: “I have two kids under two, I wonder if there is any point in saving for college. At this rate I doubt they’ll ever have to work.” Careers still exist.
The master’s degree would have been completed by now and would be among the most employable credentials in the field. On r/ProgrammerHumor, a meme post with roughly 40,000 upvotes showed Squidward looking anxious, captioned: “How I sleep as a CS student witnessing the accelerated development of technologies that will 100% replace me in the near future.” CS graduates remain among the most employable people in the workforce. Anthropic’s own research found that less than 4.5 percent of remote jobs could be completed by AI agents.

The creative apocalypse followed a similar arc. When DALL-E 2, Midjourney, and Stable Diffusion launched in 2022, Reddit art communities erupted. On r/ArtistLounge, users predicted all commercial art would be AI-generated within one to two years. A follow-up thread titled “Professional artists: how much has AI art affected your career?” generated 321 comments. Researchers at the Erasmus Initiative analyzed the thread and found professional artists almost unanimously reporting that AI tools had little to no impact on their careers. Responses included “It didn’t affect my income or clients at all. I thought it would” and “AI has zero influence on my work.” The 22-million-member r/Art subreddit banned all AI art, then banned a human artist named Ben Moran because his hand-painted digital art looked too polished. Moderators told him: “Even if you did paint it yourself, it’s so obviously an AI-prompted design that it doesn’t matter.” The protest posts received over 125,000 upvotes and the subreddit went private. This became a case study in AI panic causing more harm than AI itself.

The education panic was equally overblown in its most extreme predictions. On r/Teachers in May 2023, a thread titled “ChatGPT is the devil” predicted the permanent death of writing assignments. Multiple teachers predicted students would never learn to write again. One wrote: “We are becoming a nation of idiots in the USA, and it’s terrifying that these kids will be taking care of me in my dotage.” The Fordham Institute called the “end of writing” claim “Bollocks.” MIT Technology Review headlined the recalibration: “ChatGPT is going to change education, not destroy it.” Perhaps the most revealing education incident: a Texas A&M professor used ChatGPT itself to detect AI cheating, then failed more than half his class and tried to block seniors from graduating. ChatGPT falsely claimed to have written papers it hadn’t. No students were ultimately prevented from graduating, and the professor’s method was debunked. AI panic itself caused more harm than AI cheating did.

The deepfake election apocalypse was the most thoroughly falsified prediction of all. Throughout 2023 and 2024, Reddit’s tech and politics communities amplified predictions that AI deepfakes would destroy the 2024 elections. A University of Minnesota Law paper asked: “Deepfake 2024: Will Citizens United and Artificial Intelligence Together Destroy Representative Democracy?” TIME described war-game scenarios of post-election deepfakes causing “total chaos.” A Pew survey found nearly eight times as many Americans expected AI to be used for mostly bad purposes as for good ones in the election.
The Harvard Ash Center published its analysis under the title “The apocalypse that wasn’t.” NPR reported: “The feared wave of deceptive, targeted deepfakes didn’t really materialize.” A Columbia/Knight First Amendment Institute study examined 78 election deepfakes and found that cheap fakes were used seven times more often than AI-generated content. Only 1.3 percent of flagged misinformation was AI-generated. The major misinformation narratives of the 2024 election, including the Springfield pet-eating claims and the FEMA hurricane-response lies, didn’t use AI at all.

The total-replacement fantasy fared no better. On r/Futurology in January 2023, a user asked: “If AI takes over all work and jobs, what will humans do? Would money become useless? Would humans just sit around and live in paradise whilst AI robots supply them with everything they want and need?” As of April 2026, unemployment remains near historic lows. An MIT study in 2025 showed 95 percent of AI pilots failed to scale within enterprises. Forty-two percent of companies that launched AI initiatives scrapped them entirely. GPT-5, released in August 2025, was described by MIT Technology Review as “something of a letdown.”

The most revealing Reddit threads are the two massive r/AskReddit posts from late 2025 where people shared real experiences of AI displacement rather than predictions. These threads, with over 1,700 combined comments, show a more nuanced picture than either doomers or optimists anticipated. Real displacement occurred in translation, voice acting, copywriting, newspaper editing, and entry-level graphic design. But a critical pattern recurred: companies that replaced workers with AI frequently failed and rehired humans. One highly upvoted comment noted: “A year and a half later, the job was reopened, and they’re hiring real people again. I guess it didn’t work out with AI.” Perhaps the most incisive meta-comment in either thread: “Nobody in this thread lost their job to AI. They lost their job to humans making terrible decisions.”

Geoffrey Hinton’s 2016 prediction that radiologists would be replaced within five years stands as perhaps the original example of premature AI terror. (It has been repeated with undiminished fervor as of this year.) A decade later, few if any radiologists have been replaced. Klarna, which famously claimed AI agents had replaced 700 human workers in 2024, quietly began hiring humans again by spring 2025. Elon Musk predicted AI would be smarter than the smartest humans by 2026. White House AI czar David Sacks declared in late 2025: “The Doomer narratives were wrong.” Nvidia CEO Jensen Huang said in January 2026 that doomer narratives had “done a lot of damage, not helpful to people, industry, society, or governments.”

The word “terrifying” turns out to track not absolute danger but the gap between expectation and capability at any given moment. That gap resets with every model release, ensuring the cycle continues. Each generation’s “terrifying” becomes the next generation’s “remember when we thought that was impressive?” while the newest model inherits the same adjective. The serious AI safety researchers end up sharing vocabulary with clickbait headlines about DALL-E making weird faces, which dilutes their credibility by association. The alarm fatigue is real, and it works against everyone, including those raising legitimate concerns.

The track record is clear enough. GPT-3 was terrifying. Then it was mundane. ChatGPT was terrifying. Then elementary schoolers used it for homework.
GPT-4 was terrifying. Then it was the free-tier default. DALL-E was terrifying. Then it was a meme generator nobody remembers. Bing’s Sydney was terrifying. Then it was a two-week news cycle. AI deepfakes were going to destroy democracy. They accounted for 1.3 percent of flagged misinformation. Programming was dead. Programmer salaries went up. Art was dead. Professional artists reported no impact. Education was dead. Schools adapted.

That last word is the most important and most overlooked one: “adapt.” It is the single thing humans have been unfailingly good at, through every kind of change, including technological developments more dramatic than AI. Those who are perpetually terrified of AI underestimate both individuals’ and humanity’s collective ability to adapt, and to adapt just fine.

Maybe we could try a different word next time. Or better yet, skip the word entirely and describe what a model actually does, what it actually can’t do, and what specific risks actually warrant attention, without reaching for a term that has been applied so promiscuously that it no longer means anything at all.

Originally posted by u/Radiant_Effective151 on r/ArtificialInteligence
