I often read in AI threads that we’re on an exponential growth curve of AI capabilities, leading inevitably to a future where humans are completely outclassed by AI agents. I don’t fundamentally disagree that progress has been impressive—the power of these models is undeniable. Coding over the last year is the clearest example; as a non‑developer, even I can see the jump from “promising” to genuinely useful. What I question is whether “exponential” is the right long‑term description, or whether the exponential phase is likely to be short‑lived.

A useful analogy might be video games. For a long time, game quality and graphics—like AI today—were primarily compute‑limited. From Pong (1972) to Half‑Life (1998), progress clearly tracked Moore’s Law and felt exponential. After that, improvements became incremental, even though compute increased by orders of magnitude. Not because progress stopped, but because diminishing returns and other bottlenecks took over. Infinite exponential growth doesn’t really exist in physical systems.

So where is AI on that curve? For general text‑to‑text tasks, it increasingly feels like we may already be past the steepest part. Things are better than a year ago, but not dramatically so. Coding has advanced more noticeably, so maybe that’s still earlier on the curve—but it’s hard to argue we’re at the very start of an exponential phase.

For context, I’m a scientist working in hardware R&D. These tools are useful, but not yet game‑changing for serious technical work. Time will tell whether we get another sustained exponential—or whether we’re already heading into diminishing returns.
Originally posted by u/DrPurple4 on r/ArtificialInteligence
