MiniMax just released Mavis with Agent Teams. Claude Code shipped Agent View. OpenAI has the Agents SDK, Google has ADK. Every major AI company is converging on the same thing: making agents work together requires infrastructure, not better prompts.

The Mavis tech blog crystallized something I’ve been feeling: “multi-agent systems are runtime, not prompt orchestration.” The questions that matter aren’t “what should the agent do next” but “who assigns tasks, what happens when something blocks, who verifies completion.”

The Verifier role in Mavis is the most interesting design decision. In single-agent setups, the agent is both worker and reviewer, and unsurprisingly it approves its own work most of the time. Mavis separates Worker and Verifier with different objective functions: the Worker wants to complete, the Verifier wants to find problems. The tension between them constrains quality. Pretty elegant tbh.

They’re also honest about costs. Multi-agent has three overhead categories that single-agent doesn’t: handoff cost (re-organizing information between agents), sharing cost (full context sharing explodes the window), and aggregation cost (merging 10 outputs into one deliverable). More agents ≠ better results automatically.

This tracks with my experience. I’ve been running multi-agent workflows through Verdent for a few months. Subagent architecture works well for tasks with natural boundaries: research vs. implementation vs. testing. But for tightly coupled work, a single strong agent with good context often outperforms a team that spends half its tokens on coordination.

2026 might be the year the industry admits prompt engineering has diminishing returns and starts building the runtime layer underneath.
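To make the Worker/Verifier tension concrete, here’s a toy sketch of the loop. This is not Mavis’s actual API (I don’t have their code); the `worker`/`verifier` functions are made-up stubs standing in for LLM calls, and the point is just the structure: the Verifier’s only job is to emit problems, and the Worker only stops when the issue list comes back empty.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    content: str
    issues: list = field(default_factory=list)

def worker(task, feedback=None):
    # Worker's objective: complete the task.
    # (Stub: a real worker would call an LLM with task + feedback.)
    content = f"solution for {task!r}"
    if feedback:
        content += " (revised: " + "; ".join(feedback) + ")"
    return Draft(content)

def verifier(draft):
    # Verifier's objective: find problems, not confirm success.
    # (Stub: a real verifier would run tests, lint, or a critic model.)
    issues = []
    if "revised" not in draft.content:
        issues.append("first draft never passes review")
    return issues

def run(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = worker(task, feedback)
        issues = verifier(draft)
        if not issues:
            return draft       # Verifier found nothing: accept.
        feedback = issues      # Hand the problems back to the Worker.
    return draft               # Budget exhausted; escalate in practice.
```

The key design choice is that `verifier` never sees its own work in `worker`, so the self-approval failure mode of single-agent setups can’t happen by construction.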
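The three overheads can be sketched as a toy token-cost model. The coefficients here are invented for illustration (the blog gives categories, not numbers), but the shape is the point: handoff and sharing costs scale with agent count, so total tokens grow much faster than useful work does.

```python
def multi_agent_tokens(n_agents: int, task_tokens: int, context_tokens: int) -> float:
    """Toy cost model for multi-agent overhead. All coefficients are made up."""
    work = n_agents * task_tokens                   # actual task work
    handoff = (n_agents - 1) * 0.3 * task_tokens    # re-describing work at each handoff
    sharing = n_agents * context_tokens             # each agent re-reads shared context
    aggregation = n_agents * 0.2 * task_tokens      # merging N outputs into one deliverable
    return work + handoff + sharing + aggregation
```

With `task_tokens=1000` and `context_tokens=500`, one agent costs 1,700 tokens but ten agents cost 19,700, and only about half of the ten-agent spend is actual work, which matches the “half its tokens on coordination” experience above.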
Originally posted by u/Soggy_Limit8864 on r/ArtificialInteligence
