The problem with using Claude Code for content wasn’t capability. It was that everything ran in one conversation, in one context, with no structure between stages. Research bled into writing. Writing bled into editing. Nobody was checking anything before handing off to the next step. And “publish this” was one accidental “approved” away from going live without a proper review.

So I built a multi-agent content pipeline that actually separates the concerns: six agents, two phases, one hard stop before anything publishes.

Phase 1 runs in parallel:
- Research Agent — web search, topic analysis, competitor content
- Analytics Agent — GSC + GA4 + DataForSEO data pull

Phase 2 runs sequentially, each stage depending on what came before:
- Writer Agent — draft from research brief
- Editor Agent — quality, accuracy, brand voice, humanisation
- SEO/GEO Agent — keyword optimisation, schema, GEO readiness

Then the Master Agent reviews everything and produces a summary with quality scores, flags, and the final draft, and the pipeline halts. Nothing publishes until you type “approved.”

The part I found most useful to build: quality gates. Every transition between agents checks that the previous stage actually finished correctly before handing off. Gate 1 checks that both the research and analytics files exist and show COMPLETE status before the writer sees anything. Gate 2 checks that the word count is within 50% of target and the meta section is present before the editor starts. And so on.

Without gates, a failed research stage silently produces a bad draft, which produces a bad edit, which produces a bad SEO report, and you don’t find out until the Master Agent flags it at the end, if it flags it at all. Gates make failures loud and early.

What I learned about designing multi-agent Claude Code workflows: the handoff protocol matters more than the individual agent prompts. If agents write to shared files in a predictable structure (`.claude/pipeline/research.md`, `draft.md`, etc.), every downstream agent knows exactly where to look. If handoffs are implicit (“Claude will figure out what the previous step produced”), the pipeline is fragile at every seam.

You can also re-run individual agents without restarting everything:

- `/run-agent writer "rewrite with a more technical tone"`
- `/run-agent seo "re-optimise for keyword: [new keyword]"`

That means a bad draft doesn’t invalidate good research.

Free, public, MIT licensed: https://github.com/arturseo-geo/content-pipeline-skill

Happy to answer questions about the agent architecture or the quality gate design.
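For anyone trying to picture how the two phases fit together, here is a minimal orchestration sketch in Python. It is an illustration, not the repo’s code: `run_agent` is a hypothetical stand-in for invoking a Claude Code agent, and the stage names are the ones from the post.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(name: str) -> str:
    # Placeholder for the real agent invocation; in the actual skill,
    # each agent is a Claude Code agent writing to shared pipeline files.
    return f"{name}: COMPLETE"

def run_pipeline() -> list[str]:
    results = []
    # Phase 1: research and analytics run in parallel.
    with ThreadPoolExecutor() as pool:
        results += list(pool.map(run_agent, ["research", "analytics"]))
    # Phase 2: writer -> editor -> seo, each depending on the previous stage.
    for stage in ["writer", "editor", "seo"]:
        results.append(run_agent(stage))
    # Master review produces the final summary; the real pipeline then
    # halts until a human types "approved".
    results.append(run_agent("master"))
    return results
```

The point of the shape, rather than one big prompt, is that the parallel/sequential split is explicit in code instead of implied by conversation order.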
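The gate checks themselves are simple enough to sketch. Again, this is not the repo’s implementation, just the idea: it assumes the `.claude/pipeline/` file layout described in the post, a literal `COMPLETE` status marker in each stage file, and a hypothetical `## Meta` heading as the meta-section marker.

```python
from pathlib import Path

# Assumed shared-file layout from the post; the markers are illustrative.
PIPELINE = Path(".claude/pipeline")

def stage_complete(name: str) -> bool:
    """Gate 1 condition: the stage file exists and declares COMPLETE."""
    f = PIPELINE / f"{name}.md"
    return f.exists() and "COMPLETE" in f.read_text()

def gate_1() -> bool:
    """Both Phase 1 stages must finish before the writer sees anything."""
    return stage_complete("research") and stage_complete("analytics")

def gate_2(target_words: int) -> bool:
    """Draft word count within 50% of target and a meta section present."""
    draft = PIPELINE / "draft.md"
    if not draft.exists():
        return False
    text = draft.read_text()
    within_range = abs(len(text.split()) - target_words) <= target_words * 0.5
    return within_range and "## Meta" in text
```

Because each gate is a pure check on the shared files, a failing stage stops the pipeline at the seam where it failed, instead of propagating a bad artifact downstream.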
Originally posted by u/Alternative_Teach_74 on r/ClaudeCode
