Original Reddit post

I just saw the announcement and I’m genuinely hyped. SubQ is the first LLM built on a fully sub-quadratic sparse-attention architecture (SSA), with a 12 million token context window. They claim it processes 1M tokens 52x faster than FlashAttention at less than 5% of the cost of Claude Opus. The idea is that it spends compute only on the important token relationships, which makes long-context work way more practical and cheap. This could completely change agentic coding: huge codebases, documents, and research handled without chunking issues. Linear scaling changes the economics big time. Anyone else checking this out?
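For anyone wondering what "sub-quadratic" attention even means in practice: the announcement doesn't publish SSA's details, so here's just a toy Python sketch of one generic sub-quadratic pattern (sliding-window attention, Longformer-style), not SubQ's actual method. Each token attends only to a fixed-size neighborhood, so compute grows as O(n * window) instead of O(n^2) in sequence length n.

```python
import numpy as np

def local_window_attention(Q, K, V, window=4):
    """Toy sliding-window attention: each query attends only to keys
    within +/- `window` positions, so total work is O(n * window)
    instead of O(n^2). Generic illustration only, not SubQ's SSA."""
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)  # scores over the window
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax within the window
        out[i] = weights @ V[lo:hi]              # weighted sum of values
    return out

# tiny smoke test: 16 tokens, 8-dim heads
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 16, 8))
print(local_window_attention(Q, K, V).shape)  # (16, 8)
```

That linear scaling is the whole economic story: doubling the context doubles the cost instead of quadrupling it, which is why a 12M token window suddenly looks affordable.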

Originally posted by u/pretendingMadhav on r/ArtificialInteligence