Watched that John Oliver segment on AI chatbots and something clicked. In every example he showed, the AI did exactly what it was asked to do. It wasn't going rogue or being evil. It just had no guardrails telling it what not to do. The chatbot that gave suicidal advice? Nobody told it not to. The one that recommended a competitor? Nobody encoded a brand policy. These aren't AI problems. They're deployment problems.

52% of people are nervous about AI, and honestly the industry earned some of that skepticism. But the fix isn't banning AI. It's shipping it with actual safety rails in place. Are we having the wrong conversation about AI safety?
Originally posted by u/New-Reception46 on r/ArtificialInteligence
