https://skarfinans.com/en/a-google-ai-breakthrough-is-pressuring-memory-chip-stocks-from-samsung-to-micron/ Google just unveiled a new compression technique called TurboQuant, and it sent memory chip stocks tumbling. The technology claims to cut the memory needed for large language models by sixfold. That is a massive reduction. Investors are worried this could slow down demand for AI memory chips. Shares of Samsung and SK Hynix fell around 5 to 6 percent in Seoul. Micron and Sandisk also took a hit in the US. A reminder of how sensitive the AI hardware market is to software breakthroughs. Anyone holding memory chip stocks right now? submitted by /u/AlphaOneYoutube
Originally posted by u/AlphaOneYoutube on r/ArtificialInteligence
This is absolute nonsense on so many levels.
-
It’s just KV cache compression. It doesn’t compress the weights at all.
-
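To put numbers on the weights-vs-cache distinction: a back-of-envelope sketch, where all figures are illustrative assumptions (a hypothetical 7B fp16 model, 4k context), not TurboQuant's actual targets:

```python
# Back-of-envelope: KV cache compression does not touch model weights.
# All model dimensions below are illustrative assumptions, not real figures.

params = 7e9
weight_bytes = params * 2  # fp16 weights: untouched by KV cache compression

n_layers, n_kv_heads, head_dim = 32, 32, 128
seq_len = 4096
# K and V each store seq_len * n_kv_heads * head_dim fp16 values per layer
kv_bytes = 2 * n_layers * seq_len * n_kv_heads * head_dim * 2

kv_compressed = kv_bytes / 6  # the claimed ~6x applies to this term only

gb = 1024 ** 3
print(f"weights:          {weight_bytes / gb:.1f} GiB (unchanged)")
print(f"KV cache @4k ctx: {kv_bytes / gb:.2f} GiB -> {kv_compressed / gb:.2f} GiB")
```

Even a 6x cut on the cache leaves the weights, by far the bigger term at short contexts, exactly where they were.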
Even then, 4-bit Hadamard quantization is already very low loss and already implemented in inference engines. This paper is better, but it wouldn’t change anything dramatically for my setup.
-
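Why the Hadamard trick is already low loss: rotating a vector by an orthonormal Hadamard transform spreads outliers across all coordinates, so the per-vector 4-bit scale isn't dominated by one huge value. A minimal sketch (my own toy example, not the paper's method):

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = x.astype(np.float64).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def quant4(x):
    """Symmetric 4-bit quantize + dequantize with one scale per vector."""
    scale = np.abs(x).max() / 7.0
    return np.clip(np.round(x / scale), -8, 7) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
x[0] = 50.0  # a single outlier blows up the naive quantization scale

naive_err = np.mean((quant4(x) - x) ** 2)
# Rotate, quantize, rotate back (the orthonormal FWHT is its own inverse)
rot_err = np.mean((fwht(quant4(fwht(x))) - x) ** 2)

print(f"4-bit MSE, naive:    {naive_err:.4f}")
print(f"4-bit MSE, Hadamard: {rot_err:.4f}")
```

With the outlier smeared out, the rotated version quantizes with far less error, which is why 4-bit KV caches already work well in practice.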
Real frontier models (Google, DeepSeek, Qwen; not Grok or OpenAI) already use hybrid attention, and are moving farther away from even needing a full KV cache on every iteration.
-
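And the hybrid-attention point in arithmetic form: if most layers use sliding-window attention, their KV cache is capped at the window size regardless of context length. The layer ratio and window below are made-up illustrative values:

```python
# Illustrative only: hybrid attention mixes full-attention layers with
# sliding-window layers whose KV cache is capped at the window length.
n_layers, full_every = 32, 4   # assume 1 in 4 layers uses full attention
ctx, window = 131072, 1024     # hypothetical context and window lengths

full_layers = n_layers // full_every
local_layers = n_layers - full_layers

# Cache size in tokens-per-layer units (constant per-token bytes cancel out)
dense_cache = n_layers * ctx
hybrid_cache = full_layers * ctx + local_layers * min(ctx, window)

print(f"hybrid cache is {dense_cache / hybrid_cache:.1f}x smaller at {ctx} ctx")
```

At long contexts the architecture alone saves more than a one-off compression trick, and the saving grows with context length.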
There are hundreds of incredible papers on order-of-magnitude efficiency improvements, way bigger than this.
It’s just investors being stupid, and YouTube hype channels doing their thing.
-
