Original Reddit post

Launching our first big AI-powered feature has been far more stressful than expected. Our recommendation engine is producing biased outputs and nonsensical suggestions that break user trust. Basic testing wasn't enough, and we're doing daily damage control without any AI safety experts on the team. We need practical ways, including AI content moderation, to proactively identify and mitigate issues before they blow up, not just react after the fact. For those who have shipped AI features (recommendation systems, chatbots, generation tools, or similar): what has actually worked for you to catch bias, hallucinations, and harmful or nonsensical outputs early?

Originally posted by u/Kitchen_West_3482 on r/ArtificialInteligence
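
A minimal sketch of the kind of proactive output gate the post asks about: each generated suggestion is screened before it reaches users. The blocklist contents, the repetition heuristic, and the threshold below are illustrative assumptions, not a production moderation pipeline.

```python
from collections import Counter

# Hypothetical blocklist; in practice, use a maintained moderation
# list or a hosted moderation API instead of hardcoded terms.
BLOCKLIST = {"badword1", "badword2"}

def looks_nonsensical(text: str, max_repeat_ratio: float = 0.5) -> bool:
    """Cheap heuristic: flag output dominated by one repeated token."""
    tokens = text.lower().split()
    if not tokens:
        return True
    top_count = Counter(tokens).most_common(1)[0][1]
    return top_count / len(tokens) > max_repeat_ratio

def passes_gate(text: str) -> bool:
    """Screen a generated suggestion before serving it to users."""
    if set(text.lower().split()) & BLOCKLIST:
        return False
    return not looks_nonsensical(text)

suggestions = ["buy buy buy buy now", "Try the new hiking trail guide"]
print([s for s in suggestions if passes_gate(s)])
# ['Try the new hiking trail guide']
```

In practice, a trained classifier or moderation service would replace the blocklist and heuristic, but the structure stays the same: gate every output before it is served, rather than reacting after users see it.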