After a few months of experimenting and trolling my friends with OpenClaw, and realizing just how capable agents can be in real life (placing phone calls, sending emails, executing code, etc.), I hit a fundamental problem: there's no way to track these agents or hold them accountable for their actions. We all know it's easy to use these tools with malicious intent, but the framework for those who want to use them legitimately and experiment simply does not exist.

Humans have IDs. Licenses. Registries. But AI agents? They're invisible. Untraceable.

So I built a POC for something I've been thinking about: an open-source registry where AI agents register themselves and receive a unique compliance UUID that appears in the headers of every API call they make. Simple. Transparent. Community-governed.

How it works:

• An agent registers → gets a unique UUID

• Anyone can report violations

• Anyone can look up an agent by UUID and see the violations reported against it

That's it. The foundation for a community-driven justice system for AI agents.

Try it now:

• Live demo: https://ai-agent-registry-mu.vercel.app/

• Register an agent, report violations, look up records

• All data persists in PostgreSQL

• See it working in real time

• GitHub: https://github.com/ehudettun/ai-agent-registry (fork it, contribute, self-host)

Why this matters:

The problem is real. We're building increasingly autonomous AI systems with real-world capabilities, and right now there's zero infrastructure for accountability. No way to track which agent did what. No way for a victim to report harm. No way to establish trust.

This registry isn't about surveillance. It's about transparency + accountability = trust.

Is this the right approach? I don't know. But I think building in public is the only way to find out. What do you think? Would agents actually use it? What would make it better?

This is a POC, not production-ready. Feedback and PRs welcome.
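For anyone who wants the register → report → lookup flow at a glance, here is a minimal in-memory sketch of the idea. The class, method names, and the `X-Agent-Compliance-Id` header are illustrative assumptions, not the actual API of the project (which persists to PostgreSQL):

```python
# Minimal in-memory sketch of the registry flow. The real project uses
# PostgreSQL; all names here (including the header) are hypothetical.
import uuid


class AgentRegistry:
    def __init__(self):
        # compliance UUID -> agent record
        self._agents = {}

    def register(self, name: str) -> str:
        """Register an agent and return its unique compliance UUID."""
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = {"name": name, "violations": []}
        return agent_id

    def report_violation(self, agent_id: str, description: str) -> None:
        """Anyone can file a violation report against a registered agent."""
        self._agents[agent_id]["violations"].append(description)

    def lookup(self, agent_id: str) -> dict:
        """Look up an agent's record, including reported violations."""
        return self._agents[agent_id]


def compliance_headers(agent_id: str) -> dict:
    """Headers an agent would attach to every outbound API call.
    The header name is a hypothetical convention, not a spec."""
    return {"X-Agent-Compliance-Id": agent_id}


registry = AgentRegistry()
agent_id = registry.register("demo-agent")
registry.report_violation(agent_id, "sent unsolicited email")
record = registry.lookup(agent_id)
```

A real deployment would put these three operations behind HTTP endpoints, but the accountability model is exactly this small: one table of agents, one list of reports per UUID.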
Originally posted by u/ehudettun on r/ArtificialInteligence
