I know LLMs are basically inference machines that work with tokens, but with the new neuromorphic hardware out there, like Intel's Loihi chips or the Hala Point system Intel deployed at Sandia National Labs, will the future of AI move away from large language models and toward architectures inspired by human biology? Things like Spiking Neural Networks, MatMul-free LLMs, and continuous-learning architectures. Maybe using pixels as the input instead of tokens, or other kinds of inputs entirely, the way humans have several senses.

The argument I keep seeing is that transformers waste power moving data around, and that true intelligence requires sparse connectivity, local processing, and maximizing the Information-to-Energy (I/E) ratio; Hala Point tries to solve this by essentially building a custom physical brain. Or, by the time we replace the LLM architecture, will we probably already have AGI?
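For anyone unfamiliar with what "spiking" means in practice, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit behind most Spiking Neural Networks. It's meant to illustrate why they're sparse and event-driven (most time steps produce no spike, so event-driven hardware can skip them). The function name, constants, and input drive are all illustrative choices of mine, not parameters of Loihi or Hala Point.

```python
import numpy as np

def simulate_lif(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence of input currents.

    Returns the membrane potential trace and the spike train (0/1 per step).
    All parameters are illustrative, not tied to any real neuromorphic chip.
    """
    v = 0.0
    potentials, spikes = [], []
    for i_t in input_current:
        # Leaky integration: the potential decays toward 0 and is driven by the input.
        v += dt / tau * (-v + i_t)
        if v >= v_thresh:
            spikes.append(1)   # emit a spike (an "event")
            v = v_reset        # reset after firing
        else:
            spikes.append(0)   # silent step: no event, so no downstream work needed
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current = rng.uniform(0.0, 3.0, size=200)  # noisy drive, mean above threshold
    _, spike_train = simulate_lif(current)
    # Sparsity is the whole point: only a small fraction of steps fire.
    print(f"spikes: {spike_train.sum()} / {spike_train.size} steps")
```

The contrast with a transformer is that here computation only happens when a spike occurs, instead of dense matrix multiplies over every token at every layer; that's the intuition behind the I/E-ratio argument.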
Originally posted by u/ShoulderDelicious710 on r/ArtificialInteligence
