Wrote up my 6 main strategies here. The bottom line is that my approach is much more conservative than most of the approaches I see here. I wanted to show how I do it as an aging Millennial, on a monorepo that has everything a modern TypeScript stack can have:

- Nx for monorepo management
- NestJS for backend microservices
- Angular for frontend applications
- MySQL (Sequelize ORM) for databases
- Redis for caching
- Docker for containerization
- Kubernetes/Helm for deployment

I think a monorepo is the best option for AI-assisted development. There is a great article from the Nx team on this; I personally think they do an awesome job with monorepo management, and they address how to organize the architecture around AI-assisted development. I am trying to automate as much as possible and have the code written and reviewed by the agent, but I am not there yet. For a greenfield project, like my blog, I did very little revision, but in real-world scenarios I just wasn't able to pull it off.

TL;DR:
- I don’t use autonomous loops for production code
- I tried ROLF loops. The results weren’t convincing for the code I need to maintain. Planning matters, but I stay in control and approve every change.
- Plan mode is essential
- I read and edit the plans before accepting them: add constraints, remove unnecessary steps. I try to be specific about what I want. This saves massive amounts of tokens that would otherwise be spent fixing bad code later. Here is a cool guide for prompts: https://www.promptingguide.ai/
- Custom agents + project-specific skills
- Built a Google Search Console analyzer agent for SEO planning. Use MCP servers (Atlassian, MySQL) for integrations. Created project-specific skills files that describe Next.js patterns I want Claude to follow.
- Different models for different tasks
- Sonnet 4.6 or Opus for complex architectural decisions and unfamiliar libraries. Haiku for boilerplate, refactoring, and repetitive changes. No reason to burn expensive tokens on simple work.
- Explicit > implicit
- Never hope Claude does what you want. Tell it explicitly. Example: “Use the Docs Explorer agent to check BetterAuth docs before implementing Google OAuth. Store tokens in PostgreSQL. Follow our error handling patterns in /lib/errors.”
- I verify everything (and give Claude tools to verify it)
- I review all code, but I also give Claude tools to verify its work: unit tests, E2E tests, linting, and the Playwright MCP for browser testing. AI sometimes writes tests that pass by adjusting to the wrong code, so I review the tests too.

The main lesson: AI is amazing for productivity when you stay in control, not when you let it run autonomously. This has been my experience. That being said, I do have APM for deep thought.

Happy to answer questions about using Claude Code for healthcare/production work or maintaining AI-assisted codebases long-term.
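For the MCP integrations mentioned above: Claude Code can read a project-scoped `.mcp.json` at the repo root, so the whole team shares the same servers. A minimal sketch, assuming a MySQL MCP server; the package name and connection URL below are placeholders, not the author's actual setup:

```json
{
  "mcpServers": {
    "mysql": {
      "command": "npx",
      "args": ["-y", "<your-mysql-mcp-server-package>"],
      "env": {
        "MYSQL_URL": "mysql://localhost:3306/app"
      }
    }
  }
}
```

Checking the file into the repo means the agent gets the same integrations on every machine, which matters when you expect it to verify its own work.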
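On "follow our error handling patterns in /lib/errors": the point of pointing Claude at a concrete module is that it copies a real shape instead of inventing one. A minimal sketch of what such a module could look like; the class and function names here are invented for illustration, not the author's actual code:

```typescript
// Hypothetical shared error module, the kind of thing /lib/errors might hold.

export class AppError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly status: number = 500,
  ) {
    super(message);
    this.name = "AppError";
  }
}

// Normalize unknown failures so every layer throws the same shape.
export function toAppError(err: unknown): AppError {
  if (err instanceof AppError) return err;
  const message = err instanceof Error ? err.message : String(err);
  return new AppError(message, "INTERNAL_ERROR", 500);
}
```

With a file like this in the repo, "follow our error handling patterns" becomes a checkable instruction rather than a hope.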
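The point about tests that "pass by adjusting to wrong code" can be made concrete. A tiny illustration (all names invented): a test whose expectation is derived from the implementation's own output hides a bug that a requirement-based expectation catches:

```typescript
// Buggy implementation: should round cents to dollars, but truncates first.
function toPrice(cents: number): string {
  return (Math.trunc(cents) / 100).toFixed(2);
}

// Bad test, adjusted to the code: locks in whatever the function returns.
const badExpectation = toPrice(199.9);

// Good test: asserts the requirement, independent of the implementation.
const goodExpectation = "2.00"; // 199.9 cents should round to $2.00

console.log(toPrice(199.9) === badExpectation);  // true — bug hidden
console.log(toPrice(199.9) === goodExpectation); // false — bug caught
```

This is why reviewing the tests matters as much as reviewing the code: an agent that writes both can make them agree with each other without either matching the requirement.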
Originally posted by u/bratorimatori on r/ClaudeCode
