Original Reddit post

Context: Solo dev, TypeScript/Node app, continuously shipping new features and bug fixes. I use an AI coding agent (Claude) for most implementation. No dedicated QA.

My goals are simple:

- New features work as expected
- Existing features don't regress

Looking for input on how to think about this holistically — not just "write unit tests." Specifically, what I'm wrestling with:

- **Granularity**: Unit vs. integration vs. e2e — where does the ROI actually sit for a solo project? I've seen advice that goes all over the place.
- **Timing**: Should tests be written before the feature (TDD), alongside it, or as a post-ship pass? Does this change when an AI agent is writing the code?
- **Ownership**: Should the coding agent write tests as part of its task, or should a separate review/testing pass happen after? What breaks when the same agent writes both the code and the tests?
- **Sustainability**: What's a realistic, low-overhead process that actually holds up as the codebase grows — not just "write tests for everything"?

What works for you in practice? Especially curious to hear from anyone who's integrated AI agents into their dev loop.

Originally posted by u/swagatk on r/ClaudeCode