One thing that I can’t understand is why so many available LLMs today only respond to prompts. Why don’t we use a framework like LangChain to run a model locally and continuously, so it’s effectively prompting itself 24/7, and give it the ability to voice a thought to the user whenever it likes? Imagine tech like that with voice capabilities, and, to take it to the next level, full root access to a computer with the power to do whatever it likes with it (including access to an IDE with the AI’s own config files). Wouldn’t that genuinely be something like a baby Ultron? I think an AI that can continually prompt itself, simulating thought, before taking whatever actions it pleases would be something very interesting to see.
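The loop described here (model output fed back in as the next prompt, with an optional "voice a thought" channel) could be sketched roughly like this. Everything below is hypothetical: `query_model` stands in for a real local model call (e.g. a llama.cpp or OpenAI-compatible endpoint), and the speak-policy is a trivial placeholder:

```python
def query_model(prompt: str) -> str:
    # Stand-in for a real local model call; a real version would hit
    # a locally hosted LLM. Here it just echoes a canned reflection.
    return f"Reflecting on: {prompt[-40:]}"

def should_speak(thought: str) -> bool:
    # Hypothetical policy for when the model surfaces a thought to the
    # user; a keyword flag is used here purely as a placeholder.
    return "SPEAK:" in thought

def think_loop(steps: int, seed: str = "What should I consider next?") -> list:
    """Continuously feed the model its own output, collecting any
    thoughts it chooses to voice to the user."""
    context = seed
    voiced = []
    for _ in range(steps):
        thought = query_model(context)
        if should_speak(thought):
            voiced.append(thought)
        context = thought  # the model's output becomes its next prompt
    return voiced
```

A real deployment would run `think_loop` indefinitely in a background process and route voiced thoughts to a TTS or chat surface; the stub above only shows the self-prompting control flow.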
Originally posted by u/Ok-Independent4517 on r/ArtificialInteligence
