This may not be the right sub, but I'm casting a broad net. How are you actually getting transparency out of your AI vendors?

In Australia, regulatory pressure isn't there yet, and vendors aren't required to provide full details of performance test results, bias and fairness assessments, or hallucination rates. When I ask for evidence that training data is ethically sourced and representative of the people the system will be used on, the answer is almost always "proprietary."

I'm not asking for model weights. I just want the kind of evidence that, if shit goes sideways, would show we did our due diligence on a high-stakes deployment. It's hard to do meaningful governance work when the people building the systems won't tell you how they were built or how they actually perform.

For those of you in markets with stronger regulatory pressure, are you genuinely getting this from vendors? And if so, how? Procurement language, contractual requirements, model cards, third-party audits, regulatory disclosure? And once you have it, is it actually usable, or is it still surface-level marketing?

Same question for those in Australia. How are you managing this?
Originally posted by u/Existing_Ad3299 on r/ArtificialInteligence
