Original Reddit post

Rodney Brooks argues that robots do not need AGI to be useful, and that AGI itself is likely centuries away. Artificial general intelligence, defined as human-level reasoning and understanding, is not, in his view, a prerequisite for practical robotics. He estimates AGI is roughly 300 years off and describes it as a moving target whose definition has shifted over time rather than a concrete technical goal. Brooks contrasts AGI with the kinds of intelligence robots actually need today: value comes from systems that are narrowly designed, reliable, and able to perform specific tasks safely and consistently. Passing tests or generating convincing language, he says, does not equate to general intelligence. His position is that focusing on AGI distracts from real deployment: robots can deliver meaningful results now, without human-level intelligence, as long as they work predictably in real environments and meet reliability and safety requirements.

Originally posted by u/Responsible-Grass452 on r/ArtificialInteligence