I have a game codebase written in Golang, about 350k SLOC; a little over half of that is compiled to WebAssembly and runs as a modern web app (WebGPU or WebGL, WebTransport, WebAudio, etc.). Over several weeks I branched out to try Augment, Cursor (on Auto), and Codex with gpt-5.5, and I gave each tool a good, honest shake. None of them came close to Opus 4.7 in Claude Code.

What I notice about Opus is that it:

- Works harder to understand my code and identify all the relevant places where it might need to make updates.
- Seems to understand the little nuances in my prompts better.
- As a result of the above, has significantly higher accuracy in properly implementing what I requested on the first (or second) try.

By contrast, other models seem focused on telling me they're done as quickly as possible, but they end up "cartel coding": adding a few lines here and there and calling it done. In a client/server program where state has to be marshaled across the wire and kept in sync between client and server, many feature changes require changes at numerous layers of the stack. Opus 4.7 seems to understand that implicitly more than any other model, and it usually "does the right thing."

The runner-up is gpt-5.5 in Codex, which is good if I'm very explicit about what I want, but I have to spell out what to do on the client, what to do on the server, and be almost annoyingly detailed, to the point where I could have written it myself. Not Opus. Most of the time, I just tell it what I want from a user-facing perspective, and it automatically changes the client, server, network proto, renderer, etc. all in one go.

I tried to quit Claude Code and failed, because it's simply the best at understanding and working on large codebases. This is my experience.

/me cancels my cancellation of Claude Max 20x
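To make the "changes at numerous layers" point concrete, here is a minimal sketch (not the author's actual code; `PlayerState`, `clientEncode`, and `serverApply` are hypothetical names, and JSON stands in for whatever wire format the game really uses). Adding one user-facing feature, say a stamina bar, means touching the shared wire type, the client code that populates it, and the server code that applies it, all in lockstep:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PlayerState is a hypothetical shared wire type. A new feature
// (e.g. the Stamina field) forces coordinated edits here, in the
// client encoder, and in the server's apply logic.
type PlayerState struct {
	ID      uint32  `json:"id"`
	X       float64 `json:"x"`
	Y       float64 `json:"y"`
	Stamina float64 `json:"stamina"` // new field: client, server, and proto must all change together
}

// clientEncode marshals local state for the wire (client side).
func clientEncode(s PlayerState) ([]byte, error) {
	return json.Marshal(s)
}

// serverApply decodes wire bytes and merges them into the
// server's authoritative world state.
func serverApply(world map[uint32]PlayerState, b []byte) error {
	var s PlayerState
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	world[s.ID] = s
	return nil
}

func main() {
	world := map[uint32]PlayerState{}
	b, err := clientEncode(PlayerState{ID: 7, X: 1, Y: 2, Stamina: 0.5})
	if err != nil {
		panic(err)
	}
	if err := serverApply(world, b); err != nil {
		panic(err)
	}
	fmt.Println(world[7].Stamina) // prints 0.5
}
```

Forget any one of those edits and the feature silently half-works, which is exactly the failure mode of "a few lines here and there."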
Originally posted by u/allquixotic on r/ClaudeCode
