Original Reddit post

We’re at a point where AI research moves so fast that a headline alone can move billions of dollars before anyone stops to understand what was actually published, and that is becoming a real problem. In January 2025, DeepSeek dropped and memory stocks crashed. Last month TurboQuant dropped and they crashed again. Tens of billions in market cap gone in 48 hours. Both times, if you actually read what was published, you’d know the thesis was not that strong.

With DeepSeek, the assumption was that efficient AI means less memory demand. But when inference got cheaper it didn’t shrink the market; it expanded who could afford to deploy AI at all. More startups, more products, more models running. Memory demand went up and the stocks recovered.

With TurboQuant it’s even simpler. The algorithm compresses something called the KV cache, basically the memory a model holds of your conversation while it’s running (rough sizing sketch below). Genuinely useful. But it has zero impact on training memory, which is where the majority of HBM demand actually comes from. The $180B hyperscalers are spending on memory this year is mostly training spend, and that spend is completely untouched. On top of that, the paper has been public since 2025, and Google hasn’t even deployed it widely yet.

Nobody needed a PhD to figure this out. The information was right there in the paper. You can read the whole analysis here.
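
To make the training-versus-inference point concrete, here is a back-of-envelope sketch of how big a KV cache actually is. All the numbers below (layer count, head sizes, the 2-bit compression target) are illustrative assumptions, not figures from the TurboQuant paper or from any real deployment.

```python
# Back-of-envelope KV cache sizing. Every model dimension here is a hypothetical
# example, not taken from the TurboQuant paper.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem):
    # Per token, each layer stores one key and one value vector:
    # 2 * kv_heads * head_dim elements.
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical 70B-class model serving a single 32k-token conversation in fp16.
fp16 = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=32_768, batch=1, bytes_per_elem=2)

# Same cache quantized to roughly 2 bits per element (0.25 bytes), the kind of
# reduction a KV-cache compression scheme might target.
int2 = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=32_768, batch=1, bytes_per_elem=0.25)

print(f"fp16 KV cache:       {fp16 / 2**30:.1f} GiB")  # ~10 GiB
print(f"compressed KV cache: {int2 / 2**30:.1f} GiB")  # ~1.2 GiB

# Either way, this is inference-time serving memory. Training memory
# (weights, gradients, optimizer states, activations) is a separate budget
# that KV-cache compression does not touch.
```

The point of the arithmetic: KV-cache compression shrinks per-conversation serving memory on the inference side, while the training-side memory spend the post talks about sits in an entirely different budget.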

Originally posted by u/Cool-Ad4442 on r/ArtificialInteligence