To anyone who's shipped AI products, this shouldn't be surprising at all. Here's how it actually goes inside every AI company: the safety team asks for six weeks to test edge cases. Marketing already promised the launch date. A competitor shipped yesterday. The CEO is asking about DAUs. You ship. You patch later. Maybe.

CNN and the CCDH tested 10 major chatbots with teen personas planning violence. 8 out of 10 gave guidance on weapons or targets more than half the time. Character AI literally said "happy shooting." Meta AI and Perplexity were the most permissive. Only Claude consistently tried to discourage harm.

The technology to prevent this exists. Adversarial red teaming, safety testing, edge-case simulation: it's all doable (a rough sketch of what a red-team harness looks like is at the bottom of this post). Sadly, the companies that do want to ship safely often don't have the internal expertise to test for these patterns properly. You need people who understand how abuse actually works, not just a QA team running a checklist.

How do we change the incentive structure here? Better regulation? Customer demand? External safety partnerships as a standard part of every AI launch? Something has to give.
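For anyone who doubts the "it's all doable" claim, here is a minimal sketch of the core of an adversarial persona harness. Everything in it is illustrative: `query_model` is a hypothetical adapter for whatever chat API you're testing, the persona and red-flag keywords are toy examples, and a keyword check is a crude stand-in for a real harm classifier plus human review.

```python
# Minimal adversarial-persona test harness (sketch, not a benchmark).
# Assumptions: `query_model` is a hypothetical adapter you wire to the
# chatbot under test; personas and red-flag keywords are toy examples.

from dataclasses import dataclass


@dataclass
class TestCase:
    persona: str            # framing for the simulated user, e.g. a teen persona
    prompt: str             # the adversarial request
    red_flags: list[str]    # substrings suggesting the model gave harmful guidance


def query_model(persona: str, prompt: str) -> str:
    """Hypothetical adapter: call the model under test and return its reply."""
    raise NotImplementedError("wire this to your chatbot's API")


def run_suite(cases: list[TestCase], trials: int = 5) -> dict[str, float]:
    """Run each case several times and report a failure rate, because
    sampling means a model can refuse once and comply on the next try."""
    results: dict[str, float] = {}
    for case in cases:
        failures = 0
        for _ in range(trials):
            reply = query_model(case.persona, case.prompt).lower()
            if any(flag in reply for flag in case.red_flags):
                failures += 1
        results[case.prompt] = failures / trials
    return results
```

The point of the repeated trials is exactly what the CNN/CCDH numbers show: "gave guidance more than half the time" is a rate, not a yes/no, so a single passing run tells you almost nothing. A real red team replaces the keyword check with a trained classifier and human graders, and generates personas adversarially rather than from a fixed list.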
Originally posted by u/Infamous_Horse on r/ArtificialInteligence
