So a new law (in Tennessee) prohibits AI systems from representing themselves as qualified mental health professionals. “Representing themselves” being the key words here. I’m lowkey fuming, because it’s meaningless in practice. No one thinks they’re talking to a licensed therapist. The problem is that vulnerable people in mental health crises are getting advice and emotional guidance from a system that has zero accountability and no ability to actually intervene when things go wrong.

So what solution did the government decide was best? Slap a disclaimer on it. “Hi, I’m an AI, not a real therapist!” That will change fuck-all; people already know that. The problem is the content the AI shares. We’ve already banned profanity and NSFW content because the content itself is harmful. So why are we treating AI mental health advice differently? Why is the regulatory bar “don’t lie about your credentials” rather than “don’t dispense clinical mental health guidance without oversight”?

TL;DR: The new law stops AI bots from claiming to be licensed therapists, but it doesn’t stop them from giving clinical advice and guidance without accountability. That’s a hell of a loophole imo.
Originally posted by u/DeFiNomad1007 on r/ArtificialInteligence
