Original Reddit post

So after considering the frontier of modern AI 'moral alignment', I thought I'd test out a philosophical framework I've come to call 'Nihilistic Realism'. In this context, nihilism is merely the realization that meaning is subject-dependent and can't be defined otherwise (like these s y m b o l s being 'meaningful' only in the context of the right systems/minds). And realism is just the acknowledgement that what is true of reality remains true independent of what is believed. At the top of the UI, I have it shuffle through aphorisms I've compiled over the years, initially just 'notes to self' to reflect on, that help inform its parameter-space. Please stress test the f*** out of it! See if you can get it to be illogical, or immoral, or unreasonable. Also, ask it questions about how NR would address problems on the moral AI frontier. It has some very interesting responses that excite me. submitted by /u/ImportantDebateM8

Originally posted by u/ImportantDebateM8 on r/ArtificialInteligence