Original Reddit post

When you have multiple AI agents working on a complex task, how do you:

- Track who's doing what and why?
- Handle failures and recover gracefully?
- Integrate results from parallel work?
- Maintain audit trails for compliance?
- Let humans intervene without blocking automation?

Most frameworks solve this ad hoc. We wrote a formal protocol specification instead.

WACP (Workspace Agent Coordination Protocol) defines the rules for multi-agent coordination. It's transport-agnostic: you can implement it over files, HTTP, message queues, whatever.

Core concepts:

- Workspaces: isolated execution contexts with hard boundaries
- Envelopes & Signals: typed messages and state notifications
- Checkpoints: immutable progress snapshots
- Tasks: work units forming dependency DAGs
- Trail: append-only audit log (every event recorded, no gaps)

Design principles:

- Capability-based security (permissions are walls, not guidelines)
- Immutability by default (checkpoints and trails can't be modified)
- Human oversight is architectural (approval gates and injection points are first-class)
- Full auditability (every single event produces a trail entry)
- Transport-agnostic (implement it however you want)

It's a 70KB spec. Think of it like TCP/IP for agent coordination: it defines what agents can say, how they communicate, what structures they operate in, and how work gets assembled.

Spec: https://github.com/Madahub-dev/wacp

CC BY-SA 4.0 licensed. Part of the larger Mada project (which also includes MFP for secure agent communication and madakit for multi-provider AI clients).

Curious what people think. Are formal protocol specs the right approach for multi-agent systems? Or should we stick with framework-specific implementations?
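To make the core concepts above concrete, here is a minimal Python sketch of how an implementation might model envelopes, an append-only hash-chained trail, and a task dependency DAG. Every class and field name here is an assumption chosen for illustration; WACP is transport-agnostic and this is not the schema the spec defines.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Envelope:
    """A typed message between agents (hypothetical field names)."""
    sender: str
    kind: str      # e.g. "task.done", "signal.blocked"
    payload: dict

class Trail:
    """Append-only audit log: entries are recorded, never mutated."""
    def __init__(self):
        self._entries = []

    def append(self, event: Envelope) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "sender": event.sender,
            "kind": event.kind,
            "payload": event.payload,
        }
        # Hash-chain each entry to its predecessor so gaps or
        # tampering become detectable on replay.
        prev = self._entries[-1]["hash"] if self._entries else ""
        entry["hash"] = hashlib.sha256(
            (prev + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        self._entries.append(entry)

    def entries(self):
        return tuple(self._entries)  # read-only view

class TaskGraph:
    """Tasks form a dependency DAG; a task is ready once all deps are done."""
    def __init__(self):
        self.deps = {}   # task -> set of prerequisite tasks
        self.done = set()

    def add(self, task: str, deps=()):
        self.deps[task] = set(deps)

    def ready(self):
        return [t for t, d in self.deps.items()
                if t not in self.done and d <= self.done]

    def complete(self, task: str, trail: Trail, agent: str):
        self.done.add(task)
        trail.append(Envelope(agent, "task.done", {"task": task}))

# Usage: two agents work a three-task DAG; every completion lands in the trail.
trail = Trail()
g = TaskGraph()
g.add("fetch")
g.add("analyze", deps=["fetch"])
g.add("report", deps=["analyze"])
print(g.ready())  # only tasks with no unmet dependencies
g.complete("fetch", trail, "agent-a")
g.complete("analyze", trail, "agent-b")
print([e["kind"] for e in trail.entries()])
```

The hash chain is one way to honor "every event recorded, no gaps": each entry commits to its predecessor, so an auditor can verify the log end to end without trusting the agents that wrote it.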

Originally posted by u/akilabdu on r/ArtificialInteligence