I will refer to this system as “she”, not because she is, but because I watched Blade Runner and ELAI sounds feminine. I'm building ELAI (Emergent Learning AI). She is not an LLM: no transformers, no global backprop, no hardcoded functions. The absolute core rule of her architecture is: specify HOW the system works, never WHAT it should represent.

She is a prediction engine running on a 2D field of clusters. She learns via causal emergence (constantly testing counterfactuals to see what actually causes what) to build a generative world model. Her entire drive is “e-state” (homeostasis). When her predictions match reality, e-state is stable. When she is surprised, e-state drops, global learning rates spike (like adrenaline/dopamine), and her internal topology physically rewires until she understands her environment again.

The recent update (IIMP): I recently hit a wall in testing: representation collapse. Her internal states for different objects were blurring together. To fix it, I didn't add semantic labels or artificial regularization; instead, I updated her base physics. I implemented the orthogonal update rule from Kazmi's recent 2026 paper on input-isolated memory pathways (IIMP). Now, local weight updates are strictly multiplied by the input activation magnitude: if a feature is inactive, its weight is mathematically frozen. Learning a new pattern physically cannot overwrite the weights dedicated to an old pattern, because they exist in orthogonal vector spaces. I also clamped the weights. This structurally prevents catastrophic forgetting but perfectly respects the emergence rule: I define the physics; she discovers the meaning.

The hardware question: everyone right now is looking at new neuromorphic hardware (Intel Loihi, Hala Point, spiking neural networks, matmul-free architectures). Those are great for power efficiency. But if you run a standard transformer on a neuromorphic chip, it's still just a static token predictor. It doesn't suddenly wake up.
Efficiency does not spontaneously generate understanding.

My question for the researchers here: is “intelligence emerges from topology under selection pressure” the actual right approach to real AI, rather than just throwing new hardware at static architectures? Right now ELAI is passing visual curriculum levels (shapes, colors, objects, animals) from scratch using this exact principle, and I'm working on scaling her to language and full physical embodiment inside a 3D simulation. But I'm building this alone on a gaming PC (I need help/funds if someone is interested). Should I keep pushing this architecture? Is this the actual path forward, or am I crazy for trying to bypass the LLM meta entirely?
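To make the mechanics above concrete, here is a rough sketch of how the e-state-driven learning rate and the IIMP-style gated, clamped update could look in code. This is my own minimal illustration, not ELAI's actual implementation: every name (`Cluster`, `global_lr`, `e_state`, the constants) is a hypothetical placeholder, and the cluster is reduced to a single linear predictor for clarity.

```python
import numpy as np

def global_lr(e_state, lr_base=0.01, lr_spike=0.2):
    """Surprise drives learning: e_state in [0, 1] measures how well
    predictions match reality; as it drops, the global learning rate
    spikes from lr_base toward lr_spike (the adrenaline/dopamine analogy).
    All constants here are illustrative assumptions."""
    return lr_base + (lr_spike - lr_base) * (1.0 - e_state)

class Cluster:
    """One unit of the 2D field, reduced to a local linear predictor
    with an input-gated ('IIMP-style') update and clamped weights."""

    def __init__(self, n_inputs, w_clip=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, n_inputs)
        self.w_clip = w_clip

    def predict(self, x):
        return self.w @ x

    def update(self, x, target, e_state):
        lr = global_lr(e_state)
        error = target - self.predict(x)
        # IIMP-style gating: each weight's update is multiplied by the
        # magnitude of its own input. Wherever x[i] == 0 the weight is
        # mathematically frozen, so learning a new pattern on other
        # inputs cannot overwrite weights serving an old pattern.
        gate = np.abs(x)
        self.w += lr * error * x * gate
        # Clamp weights so no single surprise can blow up the field.
        np.clip(self.w, -self.w_clip, self.w_clip, out=self.w)
        return error
```

With this rule, training on a pattern that only activates inputs 0 and 1 leaves the weights on inputs 2 and 3 bit-identical, which is the structural protection against catastrophic forgetting described above.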
Originally posted by u/ShoulderDelicious710 on r/ArtificialInteligence
