Original Reddit post

Imagine if pharmaceutical companies got to decide which of their own drugs were safe to sell. No independent regulator. No outside testing. Just the company that profits from the drug telling you the drug is fine. You would never accept that.

Now look at AI. The organisations leading the global conversation about AI safety - writing the guidelines, setting the standards, advising governments - are almost entirely the same organisations building the most powerful AI systems in the world. OpenAI defines AI risk. OpenAI builds the AI. Google DeepMind defines AI safety standards. Google DeepMind builds the AI. Anthropic talks about responsible development. Anthropic builds the AI. The people grading the exam are the people who wrote the answers.

To be clear, this is not a conspiracy theory. I’m not saying these people are evil. Some of the most genuinely worried people I know work at these exact companies. Smart, well-meaning, seriously concerned people. But good intentions don’t fix a broken structure. And the structure is broken.

In every industry that has ever caused serious public harm - pharmaceuticals, tobacco, aviation, nuclear energy, finance - the pattern is the same. The industry defines its own safety standards. The industry assures the public everything is fine. And then something goes wrong that independent oversight would have caught.

We are doing the exact same thing with the most powerful technology ever built. And the defence is always the same: “trust us, we’re the experts.” But they were always the experts. That was never the point. The point is who watches the experts. And right now the answer is: mostly the experts themselves.

Does that not bother anyone else?

Originally posted by u/amazing_guy1708 on r/ArtificialInteligence