I've used ChatGPT and Claude for coding over the past year. They're fine models, but the bills got expensive, around $80 monthly. The bigger issue is that each new US model version feels incremental, like iPhone releases where the numbers (or design) change but the real difference is minimal.

The thing is, when Chinese models drop new versions, the improvements actually feel substantial. US companies announce new models, but the day-to-day coding difference is barely noticeable. Why does DeepSeek or ZAI releasing a new version seem to bring actual capability jumps, while GPT-4 to GPT-5 or Claude Opus updates feel like spec bumps?

Not sponsored, just been coding for 6 years and tested GLM 5 for two weeks to see if this pattern holds.

What stood out:

- Gave it a backend project; it planned the whole architecture first: database structure, caching, error handling. It didn't just write code, it understood what I'm building.
- Debug loops: it reads logs and iterates until stable instead of throwing solutions and hoping one works.
- Multi-file refactoring: across 10+ files, it tracked dependencies without losing context.

The gap is smaller than I expected for backend work. Explanations are less polished than Claude's, but the implementation is competitive. Costs around $15 monthly vs $80+ on Claude for similar usage.

Splitting my workflow now: Claude for architecture, GLM for implementation, about 60/40.

Curious: are Chinese models actually making bigger leaps per release, or does it just feel that way because US models are plateauing?
Originally posted by u/BlueDolphinCute on r/ArtificialInteligence
