Original Reddit post

Hi folks. I wrote an introductory case for AI catastrophe from misalignment. I've previously been unsatisfied with the existing offerings in this genre, so I tried my best to write my own. Below is the four-point argument, which I tried to substantiate in the article:

1. The world's largest tech companies are building intelligences that will become better than humans at almost all economically and militarily relevant tasks.
2. Many of these intelligences will be goal-seeking minds acting in the real world, rather than just impressive pattern-matchers.
3. Unlike traditional software, we cannot specify what these minds will want or verify what they'll do. We can only grow and shape them, and hope the shaping holds.
4. This can all end very badly.

Please let me know what you think!

Originally posted by u/OpenAsteroidImapct on r/ArtificialInteligence