Original Reddit post

I’m a big believer in building things you want to use, and there’s hardly anything more fun to me every year than creating my March Madness bracket. I’ve used many different tools and methodologies over the years, some more successful than others, but nothing gave me everything I wanted easily, and I spend way too much time poring over stats from everywhere trying to get a leg up.

So over the past week, I built The Bracket Lab (GitHub repo). It’s a Monte Carlo bracket simulation app with a Next.js + TypeScript and Supabase (Postgres + auth) stack, deployed on Render. The simulation engine, data pipeline, and UI were built collaboratively with Claude Code.

This is a domain I know very well, which definitely helped. I’m a very average developer without AI help (I learned Python about 15 years ago and have used it quite a bit, but only picked up Next.js a couple of years ago and use it sporadically), but my domain knowledge in sports analytics is pretty high. That’s the first thing I’d always recommend: know the world you’re building for before you build. In my opinion, that matters more than coding knowledge at this point, because you need to know how to steer the model.

Some of the things Claude Code handled well:

Simulation engine architecture — the 10-step matchup probability pipeline (composite ratings → lever adjustments → variance modifiers → win probability), Monte Carlo simulator, bracket tree builder. Claude was great at maintaining the mathematical invariants across iterations.
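To make the shape of that pipeline concrete, here is a minimal sketch of a Monte Carlo bracket simulator. This is not the repo’s actual code: the logistic rating-to-probability curve, the `scale` constant, and all names are illustrative stand-ins, and the real 10-step pipeline layers lever adjustments and variance modifiers on top of the composite rating before the win probability is computed.

```typescript
type Team = { name: string; rating: number };

// Win probability for a vs. b from a rating gap via a logistic curve
// (a stand-in for the post's composite-ratings → win-probability steps).
function winProb(a: Team, b: Team, scale = 10): number {
  return 1 / (1 + Math.pow(10, (b.rating - a.rating) / scale));
}

// Simulate one single-elimination bracket; returns the champion.
// Teams are paired in seed order: [0] vs [1], [2] vs [3], etc.
function simulateBracket(field: Team[], rand: () => number = Math.random): Team {
  let round = field.slice();
  while (round.length > 1) {
    const next: Team[] = [];
    for (let i = 0; i < round.length; i += 2) {
      next.push(rand() < winProb(round[i], round[i + 1]) ? round[i] : round[i + 1]);
    }
    round = next;
  }
  return round[0];
}

// Run many simulated tournaments and tally championship frequency per team.
function championOdds(field: Team[], sims = 10000): Map<string, number> {
  const tally = new Map<string, number>();
  for (let s = 0; s < sims; s++) {
    const champ = simulateBracket(field);
    tally.set(champ.name, (tally.get(champ.name) ?? 0) + 1 / sims);
  }
  return tally;
}
```

The bracket tree builder in the real app would feed `simulateBracket` the actual 64-team field in bracket order; the Monte Carlo loop itself stays this simple.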

Data pipeline — CSV normalizers for three different rating systems (KenPom, Torvik, Evan Miya) with fuzzy team name matching, upsert logic, and schema validation. Each source has different conventions and Claude handled the edge cases well.
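A hedged sketch of what the fuzzy-matching piece of such a pipeline can look like. The normalization rules and the edit-distance budget here are illustrative assumptions, not the repo’s actual matcher:

```typescript
// Normalize away common cross-source naming differences:
// case, "St."/"Saint", punctuation, stray whitespace.
function normalizeName(name: string): string {
  return name
    .toLowerCase()
    .replace(/\bst\.?\b/g, "saint")
    .replace(/[^a-z0-9 ]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
  return dp[a.length][b.length];
}

// Map a raw source name to the closest canonical team, within a distance budget.
function matchTeam(raw: string, canonical: string[], maxDist = 3): string | null {
  const target = normalizeName(raw);
  let best: string | null = null;
  let bestDist = maxDist + 1;
  for (const c of canonical) {
    const d = editDistance(target, normalizeName(c));
    if (d < bestDist) {
      best = c;
      bestDist = d;
    }
  }
  return best;
}
```

Once every source row resolves to the same canonical team key, the upsert into Postgres can key on that name (or an id derived from it) instead of the source’s raw spelling.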

Catching each other’s mistakes — the most valuable moments were when Claude and I would debug engine bugs together. For example, we discovered the Four Factors formula was fundamentally inverted (cross-team comparison rewarded teams more when their opponent had better defense). Working through the math collaboratively led to a much better same-team net quality approach.
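The structural difference between the two formulas can be illustrated with a toy example. Everything below is hypothetical (field names, the single-factor focus on eFG%, the exact arithmetic); only the shape of the bug and of the fix mirrors what the post describes:

```typescript
type FourFactors = { efg: number; tov: number; orb: number; ftr: number };
// defense here means what opponents shoot/do against this team
type TeamFF = { offense: FourFactors; defense: FourFactors };

// Inverted cross-team comparison (the bug, in miniature): subtracting the
// eFG% the OPPONENT allows means a stingier opposing defense (lower efg
// allowed) RAISES this score -- exactly the wrong direction.
function crossTeamBuggy(a: TeamFF, b: TeamFF): number {
  return a.offense.efg - b.defense.efg;
}

// Same-team net quality: score each team on its own offense minus its own
// defense, then compare the two nets head to head.
function netQuality(t: TeamFF): number {
  return t.offense.efg - t.defense.efg;
}
function sameTeamEdge(a: TeamFF, b: TeamFF): number {
  return netQuality(a) - netQuality(b); // positive favors team a
}
```

With the buggy version, a team’s matchup score goes up when its opponent defends better; with the same-team net, each team’s quality is computed from its own numbers before the comparison, so the incentive points the right way.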

Refactoring at scale — CSS Modules migration across 30+ components, a lever system redesign, an ownership model overhaul. Claude handled these confidently with minimal breakage.

Areas where I had to steer quite a bit:

Domain modeling decisions — things like “Evan Miya’s BPR is additive (OE + DE), not differential like KenPom” required my basketball analytics knowledge. Claude would have happily treated all three sources the same way without that correction.
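A correction like this can be made explicit in the normalization layer, so each source is converted to one common “net rating” scale instead of being read the same way. The field names below are illustrative guesses, not the actual CSV columns:

```typescript
// KenPom-style row: adjusted offensive/defensive efficiencies,
// where net strength is the DIFFERENCE (points scored minus allowed per 100).
type KenPomRow = { team: string; adjO: number; adjD: number };

// Evan Miya-style row: offensive and defensive BPR components are each
// already signed net contributions, so net strength is the SUM.
type EvanMiyaRow = { team: string; obpr: number; dbpr: number };

function netFromKenPom(r: KenPomRow): number {
  return r.adjO - r.adjD; // differential convention
}

function netFromEvanMiya(r: EvanMiyaRow): number {
  return r.obpr + r.dbpr; // additive convention
}
```

Encoding the convention per source at the boundary means the composite-rating step downstream never needs to know which system a number came from.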

UX philosophy — the design direction, the decision to split levers into backtested vs. supplemental tiers, the contest pool size strategy, etc. all needed my sense of what would actually be useful to someone like me.

Staying focused — Claude will happily build whatever you ask for. Having a clear spec (CLAUDE.md) and backlog discipline (I kept a PROJECT_PLAN.md file that I had Claude constantly update) was essential to avoid scope creep. After the initial plan was drawn up, any new ideas or minor bugs that surfaced just got added to the backlog, and I kept pushing through the initial plan before looking at the backlog again. This is something I’ve learned over time with CC to keep projects from getting away from me.

The repo is public if anyone wants to look at the code or the CLAUDE.md that guided the project. Happy to answer questions about the workflow.

Originally posted by u/laoc00n on r/ClaudeCode