I sat down with Wendell Wallach recently. He wrote Moral Machines, has collaborated with Stuart Russell, Yann LeCun and Daniel Kahneman, and has spent 25 years working at the intersection of philosophy, technology and AI governance.

His argument isn’t doom and it isn’t hype. It’s more uncomfortable than both. We’re building systems of increasing capability without meaningful accountability structures around them. When something goes wrong, responsibility is so distributed across developers, deployers, regulators and users that nobody ends up truly accountable. He thinks that gap is more dangerous than any capability threshold we might cross in the future.

He also challenges the AGI framing directly. A system can be extraordinarily intelligent and have zero moral reasoning. We’re optimising for capability without asking what the system is capable of deciding.

The section on autonomous weapons, and who bears responsibility when an AI system causes harm in a military context, is the most unsettling part of the conversation.

Full conversation: https://youtube.com/watch?v=-usWHtI-cms
Originally posted by u/reesefinchjh on r/ArtificialInteligence
