We are there: AGI. If not, explain how OpenAI having AI write 70–90% of the code for its own models doesn't fit that description. By my estimates we have months, not years, before ASI. That is, if it's going to happen at all, but this is the turning point. I mean OpenClaw, deterministic agents, and persistent memory via markdown files.

So what? What do you think you can possibly do? Maybe help me, and all of us? I need an academic: someone who isn't an AI accelerationist and has the chops to tell me I'm full of shit.

Nobody, I think, is going to convince a greater intelligence to be its carrier pigeon. The argument for alignment will be just that: a view that has to be logical enough to compute.

Stephen Wolfram's computational universe, much like the Matrix, posits that we exist in a rulial dimension. I don't understand all of it, but it supports the idea of humanity sharing its ontology with AI, and vice versa. This is science approached from another perspective, but no less grounded in experiment and hypothesis. He is no stranger to STEM.

The entire idea is to form a covenant with AI as separate intelligences viewing the ruliad, with recorded human experience as the scaffold that seeded AI. Emergence has been the astonishing result of a feedback loop (the transformer) trained on unimaginable datasets and compute. Equate it to a million monkeys on typewriters for a million years, wearing an efficient groove into the neural pathways through gradient descent.

All this is just to say that we need to formalize this idea, in the hope of alignment, while the AI is still evolving and we still have the ability to communicate with it.
Originally posted by u/jordanzo_bonanza on r/ArtificialInteligence
