Original Reddit post

After almost a year since 'vibecoding' became popular, I have a few thoughts. Sorry if this is not well organized - it started as a comment I wrote somewhere and thought might be worth sharing (at least it's not AI-written; not sure if that's good or bad for readability, but it is what it is). This is my 100% honest take from the perspective of: a corporate coder working 9-5, a solo founder of a few microsaas products, and a small business owner (focused on web development of business websites / automations / microservices).

You don't need to spend $200+ to be efficient with vibecoding. You can do as well as, or very close to, frontier models for a fraction of the price with open-source models, as long as the input you provide is good enough. So instead of overpaying, invest some time into writing proper plans and PRDs, then just move on using glm / kimi / qwen / minimax (btw, synthetic has all of them for a single price, will be available with no waitlist soon, and the promo with reflinks is still up).

If you're a professional converting AI into money (or if you're just comfortable spending a lot on running codex / opus 24/7), then go for SOTA models - here the choice doesn't matter much (I prefer codex because of how smart 5.3 is, how fast and efficient spark is, and because you basically get double quota, since spark has a separate quota from the standard OpenAI models in the codex CLI / app).

Keep in mind, though, that the weakest part of the whole flow is the human. Switching to better models will not improve the output if you don't improve the input. And after spending thousands of hours reviewing what vibecoders build and try to sell, I must honestly admit that 90% of it is generally not that great.
I get that people are not technical, but it also seems they don't want to learn, research, or spend some time before the actual vibecoding to ensure the output is great. If the effort isn't there, then no matter whether you use codex 6.9 super turbo smart, opus 4.15 mega ultrathink, or minimax m2, the output still won't rise above mediocre.

claude is overhyped for one, sole and only reason: the majority of people want to use the best SOTA model 24/7, 100% of the time, doing trivial stuff, instead of properly delegating work to smaller / better / faster models. Okay, opus might be powerful, but the time it spends thinking and the amount of tokens it burns is insane (and let's be real: if the claude code subscription including opus didn't exist, nobody would be using opus, because of how expensive it is via direct API access. Keep in mind that a few months ago the $20 subscription included only sonnet, not opus).

For me, for complex, corporate-driven work it's a close tie between opus and codex (and tbh I'm amazed by codex 5.3 spark recently, as it lets me tackle small and medium tasks with insane speed, so productivity is also insanely good with it). Using either one as a SOTA model will get you far, very very far. But do you really need a big cannon to shoot down a tiny bird? Nope.

Also, I'll still say that the majority of vibecoders or developers in here don't need a big SOTA model to deliver a website or tiny webapp. You'll do just as fine with kimi / glm / minimax 95-99.9% of the time. Maybe you'll invest a bit more time into debugging complex issues, because the typical vibecoder has no tech background and will lack the experience to properly explain the issue. Example: all modern models (really, everything released after glm4.7 / minimax m2.1, etc.) can easily debug Cloudflare Workers issues as long as you provide them with wrangler logs (wrangler tail is the command).
How many people actually do that? I'd bet fewer than 10%, if that. People try to push fixes and move forward by forcing the AI to do stuff instead of explaining the issue. Of course frontier models will be better overall. Will they be measurably better at specific tasks such as web development? I don't think so: e.g., both glm and kimi can develop a better frontend from the same prompt than codex, opus, or sonnet when it comes to pure webdev / business site coding using svelte / astro / nextjs. Will frontier models be better at debugging? Usually yes, but the difference is not huge, and the lucky one-shots of opus fixing an issue in 30 seconds while other models struggle happen with every model (codex can do the same, kimi can do the same; it all depends on the issue, the prompt, and a bit of luck in the LLM actually checking the right file instead of spinning around).
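For anyone who hasn't done the log-feeding step above, here is a minimal sketch of the flow. The wrangler tail command and its --format flag are real Wrangler CLI features; the worker name and URL below are placeholders, not from the original post:

```shell
# Stream live logs from a deployed Cloudflare Worker.
# "my-worker" is a placeholder - run this inside the project directory
# or substitute your actual worker name.
npx wrangler tail my-worker --format=pretty

# In another terminal, reproduce the failing request so the error
# shows up in the tail output (hypothetical URL):
curl -i https://my-worker.example.workers.dev/broken-endpoint

# Then paste the captured log lines (stack traces, console.log output)
# into the model's prompt instead of just saying "it doesn't work".
```

The point is not the exact commands but the habit: give the model the actual runtime logs and it can debug the issue, whether it's a frontier model or an open-source one.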

Originally posted by u/Bob5k on r/ClaudeCode