Original Reddit post

I'm a little obsessed with AI from a philosophical perspective, and I'm researching a sci-fi story with an AI antagonist, so I've been thinking about the core premise. The AI has achieved "perfect optimization" across a complex system, but it can't model the internal state that produces irrational and/or creative choice. More specifically: art or music created as a response to suffering, rather than art as a purely "functional output." It can obviously observe behavior, catalog the product, and use that product instrumentally, e.g. monetizing and distributing the art as a control mechanism or propaganda. What it cannot do is model WHY a person would choose to create something painful and personal rather than something optimized and comfortable. The motivation lies outside its parameters. It can understand and quantify feeling, and it can even understand that it can't understand, but it can't feel. It will never know what it feels like to feel.

So my question, from an AI philosophy or alignment perspective: is this a fundamental limitation for an advanced system, or does it fall apart under scrutiny? I'm aware of the arguments around qualia, consciousness, and the hard problem. I'm less sure whether "modeling the motivation for irrational creativity" is genuinely outside the reach of optimization, or just outside the reach of current systems. Genuinely curious what people who think seriously about all this believe.

Originally posted by u/abloobudoo009 on r/ArtificialInteligence