Original Reddit post

Anthropic dropped Claude Security in public beta on April 30th, but only for Enterprise customers right now; Team and Max plans will get it “soon,” apparently. From what I can see, it lets you point Claude at a whole GitHub repo (or a folder/branch), scan for vulnerabilities, validate the findings to cut down on false positives, and then spit out explanations plus suggested patches you can review in Claude Code. Sounds nice on paper.

The docs page they linked goes on and on about how secure everything is: strict read-only by default, sandboxed bash, prompt injection defenses, permission systems, the whole nine yards. But here’s my slightly skeptical take: if they have to lock it down this hard with all these guardrails and explicit approvals, does that mean the AI still isn’t trustworthy enough to run autonomously on real codebases? We’re basically getting a fancy vulnerability scanner that still needs a human babysitter for every single change.

Also, locking the public beta behind the most expensive tier first feels a little… convenient? Like they’re testing it on the customers who already pay the most while the rest of us wait.

Don’t get me wrong, it’s better than nothing, and the false-positive reduction angle is interesting. But I’m curious what actual Enterprise users with early access are seeing. Is it genuinely finding real issues that traditional tools miss, or is it mostly good marketing with heavy human oversight required?

Anyone in the beta already? What’s your honest take: game changer, or just another AI tool that sounds better than it performs in practice?

Originally posted by u/IllAnnual7167 on r/ClaudeCode