https://www.businessinsider.com/ai-architects-what-worries-excites-them-about-superintelligence-2026-4 I have this deep, unsettling feeling that we are going to look back at these last few years of normal life wishing we could go back. I feel like people & AI companies are lying to themselves about AI being safe. Once AI gets to the point where it can work on its own, it’s not going to just listen to the rules that humans give it. If AI is going to be smarter than any human, why would it listen to us? In my opinion there’s no possible chance that we are going to be able to control it. Yes, it’s very exciting. I think it’s really cool what’s happening, but it will eventually get to the point of ending all human life. We’re going to end up as the bugs we step on. Really think about it. Do you really believe these people are going to somehow keep us safe from AI? I don’t know if I’m just dumb or something, but I don’t see a way of this ending well for humans LONG TERM. Short term, AI is going to be great. But not long term. submitted by /u/ImKiwix
Originally posted by u/ImKiwix on r/ArtificialInteligence
