Sharing a workflow in case it’s useful to anyone else exploring agentic coding loops.
The setup is one orchestrator agent (issue-resolver) that handles a GitHub issue end-to-end. It spawns subagents, each with a single job, and pauses three times for my input:
FLOW:
→ fetches the issue
→ explores the codebase, writes an architecture doc
→ drafts a plan
🟡 I review the plan. Add notes. Approve.
→ implements the plan
→ runs /ultrareview on its own diff
🟡 I look at the findings. Accept the real ones, skip the ones I disagree with.
→ applies the accepted fixes, runs tests
🟡 I check the final diff before push.
→ pushes, opens the PR.
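The gated flow above can be sketched as a plain loop. This is a minimal illustration, not the actual agent code: the step functions and the `gate` callback are stand-ins for the real subagents and my manual review.

```python
# Minimal sketch of a human-gated pipeline. Each step is a plain function
# standing in for a subagent; steps tagged gated=True pause for approval.

def run_pipeline(steps, gate=None):
    """Run steps in order, passing prior results along.

    steps: list of (name, fn, gated) tuples; fn takes the results so far.
    gate:  callback returning True to continue; defaults to a stdin prompt.
    """
    if gate is None:
        gate = lambda prompt: input(prompt + " [y/N] ").strip().lower() == "y"
    results = []
    for name, fn, gated in steps:
        out = fn(results)
        results.append((name, out))
        if gated and not gate(f"Approve output of '{name}'?"):
            return results  # human rejected: stop the run here
    return results
```

The point of the structure is that rejection at any yellow gate halts everything downstream, so nothing gets implemented, fixed, or pushed without sign-off.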
I ran it on a small Spring Boot demo project I built called LinkStash (a URL shortener with API-key rate limiting and link expiration).
The human gate mattered. Instead of guessing, the agent flagged two real engineering decisions during planning: token bucket vs. fixed window for rate limiting, and whether to return 400 for expiry timestamps in the past. That “I don’t know, you decide” behavior makes this more reliable.
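For anyone unfamiliar with the rate-limiting decision the agent surfaced, here is a rough token-bucket sketch (not LinkStash's actual implementation; the clock injection is just to make it testable). The trade-off vs. a fixed window is that a bucket smooths bursts at window boundaries instead of allowing 2x the limit when a window rolls over.

```python
import time

class TokenBucket:
    """Refill at `rate` tokens/sec up to `capacity`; each request costs 1."""

    def __init__(self, capacity: float, rate: float, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.now = now          # injectable clock for testing
        self.tokens = capacity  # start full
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Top up based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```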
Three gates feels like a lot when you watch it run. But for anything I’d actually ship, I’d rather take the extra time than push code I haven’t read.
I’m using my own MCP server for fetching issues (built it for an earlier project), but the official GitHub MCP server has a gh_get_issue tool that does the same thing. Or you could use a skill; pick whatever fits your workflow.
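If you don't want MCP at all, the plain GitHub REST API covers the fetch step too. A stdlib-only sketch (repo and issue number here are illustrative; add an `Authorization: Bearer <token>` header for private repos or higher rate limits):

```python
import json
import urllib.request

API = "https://api.github.com"

def issue_url(owner: str, repo: str, number: int) -> str:
    # Standard REST endpoint: GET /repos/{owner}/{repo}/issues/{issue_number}
    return f"{API}/repos/{owner}/{repo}/issues/{number}"

def fetch_issue(owner: str, repo: str, number: int) -> dict:
    req = urllib.request.Request(
        issue_url(owner, repo, number),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # dict with "title", "body", "labels", ...
```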
I’m sure there are better ways to structure this. Genuinely curious how others are running their agentic workflows. What’s been working for you?
(Full walkthrough video and a Medium write-up if anyone wants the links — happy to share in a comment, just didn’t want to drop them in the body.)
Originally posted by u/OrewaDeveloper on r/ArtificialInteligence
