- cross-posted to:
- ai_reddit
I’ve been running both the new Opus and GPT-5.3 against a fairly messy legacy repo I’m refactoring. The 1M-token context on Opus is holding up better than I expected for dependency tracking: it found a circular import I’d missed three times. GPT-5.3 seems faster at generating boilerplate for the new modules, but it dropped context on the global types file twice yesterday. Opus is slower but feels more stable for the deep architectural work. Is anyone else seeing this divergence, or am I just getting lucky with my Opus prompts?
Originally posted by u/HarrisonAIx on r/ArtificialInteligence
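For anyone wanting to double-check this kind of thing without a model in the loop: the circular-import case is just cycle detection over the import graph. Here's a minimal sketch of a DFS cycle finder; the module names and `deps` mapping are made up for illustration (in a real repo you'd build the graph by parsing imports with `ast`).

```python
def find_cycle(graph):
    """Return one import cycle as a list of modules, or None if acyclic.

    `graph` maps each module name to the list of modules it imports.
    """
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, ()):
            if dep in visiting:
                # Found a back edge: slice out the cycle and close the loop.
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for node in graph:
        if node not in visited:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

# Hypothetical dependency map for demonstration only.
deps = {"app": ["models"], "models": ["utils"], "utils": ["app"]}
print(find_cycle(deps))  # -> ['app', 'models', 'utils', 'app']
```

Running something like this in CI is a cheap sanity check, whichever model is doing the refactor.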
