Been building a developer tool for internal business apps entirely with Claude Code for the last 40 days. Not a weekend project - full stack with auth, RBAC, an API layer, data tables, an email system, S3 support, and PostgreSQL both local and cloud. No hand-written code: I describe what I want, review the output, iterate.

Yesterday I ran a deep dive on my git history because I wanted to understand what actually happened over those 40 days. 312 commits, 36K lines of code, 176 components, 53 API endpoints. And the thing that stood out most wasn't a metric I expected.

The single most edited file in my entire project is CLAUDE.md. 43 changes. More than any React component. More than any API route. It's the file where I tell Claude how to write code for this project: architecture rules, patterns, naming conventions, what to do and what to avoid. I iterated on the instructions more than I iterated on the code.

That kinda hit me. In a 100% AI-generated codebase, the most important thing isn't code at all. It's the constraints doc - the thing that defines what "good" looks like for this specific project. And I think it's exactly why my numbers look the way they do:

- Feature-to-fix ratio landed at 1.5 to 1 - way better than I expected.
- The codebase grew from 1,500 to 36,000 lines with no complexity wall.
- Bug fix frequency stayed flat even as the project grew.
- Peak week was 107 commits, from just me.

Everyone keeps saying "get better at prompting." My data says something different. The skill that actually matters is boring architecture work: defining patterns, setting conventions, keeping that CLAUDE.md tight. The unsexy stuff that makes every single prompt work better, because the AI always knows the context.

That ~30% of the work AI can't do for you? It's not overhead. It's the foundation.

Am I reading too much into my own data, or are others seeing this pattern too?
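For anyone who hasn't kept one of these files: a hypothetical sketch of what a project CLAUDE.md can look like. These are made-up example rules for illustration, not OP's actual file - the point is that every rule is a constraint the model sees on every prompt.

```markdown
# CLAUDE.md (hypothetical example)

## Architecture
- All data access goes through the API layer; components never query the DB directly.

## Patterns
- New list views reuse the shared DataTable component; don't hand-roll tables.

## Naming
- React components: PascalCase filenames, one component per file.
- API routes: kebab-case, plural nouns (e.g. /api/email-templates).

## Avoid
- No new dependencies without asking first.
- No inline styles; use the existing design tokens.
```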
Originally posted by u/Competitive_Rip8635 on r/ClaudeCode
