I have been following the fallout of the Anthropic vs Pentagon standoff, and it is the most honest thing I have seen in tech in years.

For those out of the loop: the US government tried to pressure Anthropic into dropping their safety guardrails for military use. Anthropic basically told them to sod off. Why? Because their own CEO admits that current AI is simply not reliable enough to remove humans from the process of making critical decisions. This is coming from the people who built the damn thing.

If the most advanced AI on the planet is not trusted by its own creators to handle high-stakes tasks without a human truth layer, then why the hell are we letting it run our entire lives? We have reached a point where we use AI to write everything and then use other AI to filter it. We are automating the human element out of existence and then wondering why the results are absolute slop.

I am a dev, and I got so fed up with this "bot-on-bot" feedback loop that I started building wecatch. We are about to launch, and the whole point is to bring that "human in the loop" back to the table. We do not use more models to fix your work. We use a structured process with 10+ independent human reviewers to strip out the robotic artifacts and make sure the writing actually sounds like a person with intent. It is basically the "spine" Anthropic is talking about, but for your professional life.

You can join the waiting list and see how we are doing it here: https://wecatchai.com/human-review

I am being fully upfront: this is a promotion for what I have built. I am posting it here because I think Anthropic has finally drawn a line in the sand, and more of us need to do the same.
Originally posted by u/Alert-Tart7761 on r/ArtificialInteligence
