Read this piece about David Silver (the AlphaGo guy), and his take kinda got me thinking - Link

He basically argues that current AI (LLMs like ChatGPT, Gemini, etc.) might hit a ceiling because they learn from human-generated data, which he compares to a limited resource. Instead, he's betting on reinforcement learning systems that learn through trial and error in simulated environments, creating what he calls "superlearners" that can discover entirely new knowledge on their own.

So instead of: AI trained on the internet

It becomes: AI learning like AlphaGo did - by playing, experimenting, failing, improving

His new startup even raised around $1.1B to pursue this direction. But won't his method be too risky?
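For anyone wondering what "learning by trial and error" actually looks like in code, here's a minimal sketch of tabular Q-learning on a toy 5-state chain world. To be clear, the environment, the hyperparameters, and everything else here are my own illustrative choices - this is the general technique family, not anything from Silver's actual systems:

```python
import random

# Toy sketch of trial-and-error learning (tabular Q-learning).
# The 5-state "chain" world and all hyperparameters below are
# illustrative assumptions, not anything from Silver's work.

N_STATES = 5           # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)     # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment transition: reward 1 only for reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(state):
    """Pick the best-known action, breaking ties randomly."""
    left, right = Q[(state, -1)], Q[(state, +1)]
    if left == right:
        return random.choice(ACTIONS)
    return -1 if left > right else +1

random.seed(0)
for _ in range(500):            # 500 episodes of pure trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what we know, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # learn from the outcome itself, not from human-generated data
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The agent figures out "always move right" on its own
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

Obviously AlphaGo used deep networks and self-play rather than a lookup table, but the core loop - act, observe the outcome, update, repeat - is the same, and no human data ever enters it.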
Originally posted by u/Ill-Big5496 on r/ArtificialInteligence
