Original Reddit post

Hey, I’m relatively new to software engineering (beyond some courses in TypeScript, a couple of quite thorough college classes on data structures and algorithms in C, and googling about software design). A friend and I have been building a fairly complex application with a very substantial backend (we probably should’ve narrowed the architecture to the initial use cases, but we didn’t).

My workflow so far: I hand-wrote (with a lot of googling/Gemini help on best practices) a 20-page architecture overview of the server backend plus the offline-mirror functionality, including most of the core interfaces by hand, and then made a big 20-feature spec-kit plan and a couple of feature-dependency/cross-seams documents.

Now I’m going feature by feature: Claude Opus + spec-kit writes the spec (its output is way more readable than Codex’s), then GPT 5.4 does 20–30 rounds of error-catching, I add manual comments, and GPT 5.4 patches the spec. Claude’s spec-kit then makes the task list, checks it, and implements it, and finally GPT 5.4 audits the implementation against the spec a few times.

Has anyone been running a process like this? It works decently and shifts the debugging load onto GPT 5.4 to preserve Claude rate limits, but it’s also very slow: ChatGPT often expands feature scope, or doesn’t fully implement spec refactors in response to the problems it finds, requiring several rounds of debugging per problem. Does anyone have suggestions?
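If the 20–30 review/patch rounds ever feel worth scripting rather than driving by hand, the control flow itself is small. Everything below is a hypothetical sketch: `review_spec` and `patch_spec` are stubs standing in for whatever model or CLI calls you actually use, and the round budget mirrors the manual process described above.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    # Problems the reviewing model found in this round (empty = satisfied).
    issues: list = field(default_factory=list)

def review_spec(spec: str) -> Review:
    # Stub: a real version would send `spec` to the reviewing model
    # and parse the issues out of its reply.
    return Review(issues=[])

def patch_spec(spec: str, issues: list) -> str:
    # Stub: a real version would ask the model to apply targeted fixes
    # for exactly the listed issues (to limit scope creep).
    return spec

def refine(spec: str, max_rounds: int = 30) -> str:
    """Alternate review and patch until the reviewer finds no issues
    or the round budget is exhausted."""
    for _ in range(max_rounds):
        review = review_spec(spec)
        if not review.issues:
            break  # converged: the reviewer raised nothing this round
        spec = patch_spec(spec, review.issues)
    return spec
```

One upside of making the loop explicit is that you can constrain `patch_spec` to address only the listed issues, which is exactly the scope-expansion problem mentioned below.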

Originally posted by u/Secure-Data-9883 on r/ClaudeCode