Original Reddit post

Feynman’s ‘cargo cult science’ idea has been a big influence on many working scientists. But to what extent does it apply to the AI field? As far as I can tell, Feynman’s epistemology assumes that understanding bottoms out somewhere — in quantum field theory, in particle interactions, in something with determinate structure. Does that hold for AI? The “mechanism” isn’t fixed here. LLMs don’t have that, right? They have statistical regularities that shift with data, scale, and context. The thing being modeled isn’t a fixed phenomenon waiting to be understood; it’s a moving target that partially constitutes itself through the modeling process. In addition, the training data is itself a historical artifact of contingent social processes. [“Contingency” does a lot of work in the social sciences.]

So… opinions?

https://nautil.us/what-would-richard-feynman-make-of-ai-today-1262875

“Much of today’s artificial intelligence operates as a black box. Models are trained on vast—often proprietary—datasets, and their internal workings remain opaque even to their creators. Modern neural networks can contain millions, sometimes billions, of adjustable parameters. One of Feynman’s contemporaries, John von Neumann, once wryly observed: “With four parameters I can fit an elephant, and with five I can make his tail wiggle.” The metaphor warns of mistaking noise for meaning.

Neural networks produce outputs that look fluent, confident, sometimes uncannily insightful. What they rarely provide is an explanation of why a particular answer appears, or when the system is likely to fail. This creates a subtle but powerful temptation. When a system performs impressively, it is easy to treat performance as understanding, and statistical success as explanation.

Feynman would have been wary of that move. He once scribbled on his blackboard, near the end of his life, a simple rule of thumb: “What I cannot create, I do not understand.” For him, understanding meant being able to take something apart, to rebuild it, and to know where it would break. Black-box systems invert that instinct. They invite us to accept answers we cannot fully reconstruct, and to trust results whose limits we may not recognize until something goes wrong.”
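For what it’s worth, the von Neumann elephant line is the classic overfitting warning, and it’s easy to make concrete. Here’s a minimal sketch in plain NumPy (my own illustration, not from the article): eight noisy samples of a straight line, fit with polynomials of growing degree. A degree-7 polynomial has enough parameters to pass through all eight points exactly, yet it tracks the true line worse than the simple linear fit, i.e. it has fit the noise, not the phenomenon.

```python
# Minimal sketch of the von Neumann warning: with enough free parameters,
# a model can fit noise exactly while learning nothing about the signal.
# Setup is illustrative; numbers are arbitrary, not from the article.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a simple line; training observations carry Gaussian noise.
true_fn = lambda x: 2.0 * x + 1.0
x_train = np.linspace(0, 1, 8)
y_train = true_fn(x_train) + rng.normal(scale=0.3, size=x_train.shape)
x_test = np.linspace(0, 1, 100)
y_test = true_fn(x_test)  # noiseless truth, to measure generalization

for degree in (1, 4, 7):
    # Least-squares polynomial fit with (degree + 1) adjustable parameters.
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# Typical result: train MSE shrinks toward zero as the degree grows
# (degree 7 interpolates all 8 points), while test MSE gets worse.
```

The point of the sketch is exactly the article’s: perfect performance on the data you have is not the same thing as understanding the process that generated it.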

Originally posted by u/AngleAccomplished865 on r/ArtificialInteligence