Original Reddit post

Most of us started the same way with Claude Code. Prompt, wait, review, fix hallucinations, prompt again, maybe get stuck in a bug for a while. I was doing the same things over and over and wondering why it didn't feel much faster than just writing the code myself. After many months of iteration, I started to find things that work and consistently move things forward at a genuinely fast pace. Here's what actually made the difference, YMMV.

Documentation, not the model, is doing the heavy lifting

This was the single biggest contributor to getting the right results for me. When people say "Claude got nerfed" or "the model is worse," nine times out of ten the model is choking on context, not declining in capability. But even if it is, make the problem the model has to solve simpler. I hit this hard on an application with a huge amount of data and relationships: Claude kept duplicating object structures and reinventing existing patterns. And when you start getting a couple of sessions of good output, it's easy to get wrapped up in shipping fast and stop scrutinizing what's generated. When the codebase gets larger, keeping what matters in context gets harder. Claude Code is damn good at parsing a codebase, but once I started maintaining real documentation, the output quality jumped. This is just architecture, requirements, and implementation docs with an index, referenced from CLAUDE.md. Not spec-kit, not beads, just be sure to maintain it. The model stopped guessing because it didn't have to: docs require less context than interpreting code. This ain't magic, the same things that make codebases maintainable for teams work with CC.

Plan before you code, every session

I pressure-test a plan with Claude before writing a line of implementation. This sounds obvious, but I used to skip it on smaller features and it always bit me, even if I didn't realize it until later on.
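A plan doc doesn't need to be elaborate. Something like the skeleton below is enough for Claude to pressure-test against; the headings are hypothetical, so adapt them to your project:

```markdown
# Plan: <feature name>

## Goal
One or two sentences on what ships and why.

## Constraints
- Existing patterns / modules this must reuse
- Explicitly out of scope

## Steps
1. ...
2. ...

## Open questions
- Things to resolve before implementation starts
```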
Review the plan doc every session, not just the first one. Keep it maintained, and tell CLAUDE.md to help you maintain it; it will help.

Git worktrees for parallel workstreams

Each Claude Code session gets its own branch in its own directory via git worktrees, and each can run its own environment; just assign new port(s) for web apps. I might have a UX enhancement branch, two new feature branches, and a polish branch all active at once. While one track is inferencing, I'm spinning up the next or validating the last. (Tip: add a sound to the Claude Stop hook in settings.json.) This is where the speed actually comes from. You're waiting anyway, so this is the right time to be reviewing output, reading diffs, or running functional tests on your worktree.

TDD, but watch for the shortcut problem

Write tests first. You can write them in English, but think about which features you want to protect: what is critical? TDD gives the model an executable definition of "done." But here's the thing that burned me: failing tests are often indicative of something breaking, but if the tests aren't well documented, Claude is more likely to make the test pass even if the change removes the value of the test. When tests fail, Claude tends to take the simplest path to green: weakening assertions, patching over the real issue, whatever gets there fastest. A failing test is a signal for you to inspect, not just a problem for the model to solve.

Type safety as guardrails

TypeScript strict mode, Pydantic, Zod schemas. Constraining the model's output so problems surface at compile time, not runtime. Probably obvious to most people here, but worth stating because it made a noticeable difference in how predictable the output is.

Atomic commits, manual review before merge

Small, focused commits across parallel workstreams. Stage the PR and read the diff in GitHub with fresh eyes every time. Run the code-simplifier skill, use /code-review; you'd be surprised how much cleaner the output is after these two steps.
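For the Stop-hook sound tip, one possible shape of the `settings.json` entry is below. This is a sketch: the hook schema can differ between Claude Code versions, and `afplay` is macOS-specific (swap in `paplay`/`aplay` on Linux):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Glass.aiff"
          }
        ]
      }
    ]
  }
}
```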
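The worktree-per-session layout described above can be sketched in a few commands. This is a self-contained demo in a throwaway repo; the branch names and ports are hypothetical:

```shell
#!/bin/sh
set -e

# Throwaway repo so the demo is self-contained.
repo=$(mktemp -d)/demo
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree (and branch) per parallel Claude Code session,
# each in its own sibling directory.
git worktree add -q ../demo-feature-auth -b feature-auth
git worktree add -q ../demo-ux-polish -b ux-polish

# Each worktree runs its own dev server on its own port, e.g.:
#   (cd ../demo-feature-auth && PORT=3001 npm run dev)
#   (cd ../demo-ux-polish   && PORT=3002 npm run dev)

git worktree list
```

`git worktree list` at the end shows the main checkout plus one line per worktree, so you can confirm which branch lives where before pointing a session at it.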
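The type-safety guardrail works the same way on the Python side: a Pydantic model rejects malformed output at the boundary instead of letting it leak into app code. A minimal sketch, with a hypothetical `Ticket` schema standing in for whatever object you ask the model to emit:

```python
from pydantic import BaseModel, ValidationError


class Ticket(BaseModel):
    """Schema for a hypothetical 'ticket' object the model emits."""
    id: int
    title: str
    done: bool = False


# Well-formed output passes validation and becomes a typed object.
ok = Ticket.model_validate({"id": 1, "title": "Fix login bug"})

# Malformed output fails loudly at the boundary, not deep in the app.
try:
    Ticket.model_validate({"id": "not-a-number", "title": 123})
except ValidationError as err:
    print(f"rejected with {err.error_count()} validation errors")
```

The point is the failure mode: the bad payload never constructs, so the error surfaces at the one place you validate rather than wherever the data is first used.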
The whole system compounds. No single piece is revolutionary, but stacked together it's a genuinely different experience from the prompt-and-pray loop. What's working for you all?

Originally posted by u/energee on r/ClaudeCode