AI chatbots powered by large language models (LLMs) are increasingly used for mental health support, yet they can give misleading or unsafe replies. There is evidence that LLMs can cause harm, ranging from incorrect medical advice to fostering overreliance. To address this, a team of researchers created MindBench.ai, an open platform that helps patients, clinicians, researchers, and regulators evaluate AI systems transparently and consistently. Building on MINDapps.org, it profiles and benchmarks AI tools using metrics developed with the National Alliance on Mental Illness, domain experts, and people with lived experience, with the aim of ensuring transparency, safety, and responsible use of AI in mental health.
Originally posted by u/dpn-journal on r/ArtificialInteligence
