Guys, we need to talk about the future of AI and why we are all in trouble if companies go down this road. Most future AI will probably take the form of humanoid robots. Humanoid robots use different technology from an LLM: they are built for specific tasks, for example home, industrial, or warehouse work. They do not depend on an LLM as a cognitive brain; they rely on machine learning models trained for those tasks. If they are well trained, they know what to do. They are not merely probabilistic and they are not the same as LLMs.

If companies start putting an LLM in as the cognitive brain of a humanoid robot that only needs to be trained for specific tasks, we run the risk of the robot hallucinating. Hallucination is already barely tolerable in text as it exists today, let alone in the physical world while a robot is doing household chores, for example.

What's more: every time companies release a new LLM, the model seems worse in practice than the previous ones in every way. We have to stop this. Companies are trying to perpetuate a problem, this time in humanoid robots. Humanoid robots do NOT need an LLM.
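To make the distinction the post is drawing concrete, here is a minimal toy sketch contrasting a task-trained controller that always takes its best-scoring action with LLM-style decoding that samples an action in proportion to probability. The action names and scores are entirely hypothetical, invented for illustration; real robot control stacks and LLM decoders are far more involved.

```python
import random

# Hypothetical action set and the scores a trained task policy might
# assign in some kitchen state (made-up numbers, for illustration only).
ACTIONS = {"pick_up_cup": 0.90, "pour_water": 0.09, "throw_cup": 0.01}

def deterministic_policy(scores):
    """Task-trained controller: always take the highest-scoring action."""
    return max(scores, key=scores.get)

def sampled_policy(scores, rng):
    """LLM-style decoding: sample an action in proportion to its score,
    so a low-probability (bad) action can occasionally be chosen."""
    actions = list(scores)
    weights = [scores[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

rng = random.Random(0)
print(deterministic_policy(ACTIONS))  # always picks "pick_up_cup"

samples = [sampled_policy(ACTIONS, rng) for _ in range(1000)]
print(len(set(samples)) > 1)  # sampling does not always pick the best action
```

The deterministic controller never chooses `throw_cup`; the sampler will, roughly 1% of the time, which is the kind of behavior the post calls "hallucinating" in practice.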
Originally posted by u/NoBit4395 on r/ArtificialInteligence
