Original Reddit post

1 year ago I posted "12 lessons from 100% AI-generated code" that hit 1M+ views. Some of those points evolved into agents.md, claude.md, plan mode, and context7 MCP. This is the 2026 version, learned from shipping products to production.

1- The first few thousand lines determine everything

When I start a new project, I obsess over getting the process, guidelines, and guardrails right from the start. Whenever something is being done for the first time, I make sure it's done cleanly. Those early patterns are what the agent replicates across the next 100,000+ lines. Get it wrong early and the whole project turns to garbage.

2- Parallel agents, zero chaos

I set up the process and guardrails so well that I unlock a superpower: running multiple agents in parallel while everything stays on track. This is only possible because I nail point 1.

3- AI is a force multiplier in whatever direction you're already going

If your codebase is clean, AI makes it cleaner and faster. If it's a mess, AI makes it messier faster. The temporary dopamine hit from shipping with AI agents makes you blind. You think you're going fast, but zoom out and you're actually going slower because of constant refactors from technical debt ignored early on.

4- The 1-shot prompt test

One of my signals for project health: when I want to do something, I should be able to do it in one shot. If I can't, either the code is becoming a mess, I don't understand some part of the system well enough to craft a good prompt, or the problem is too big to tackle all at once and needs breaking down.

5- Technical vs. non-technical AI coding

There's a big difference between technical and non-technical people using AI to build production apps. Engineers who built projects before AI know what to watch out for and can detect when things go sideways. Non-technical people can't, and architecture, system design, security, and infra decisions will bite them later.
6- AI didn't speed up all steps equally

Most people think AI accelerated every part of programming the same way. It didn't. For example, choosing the right framework, dependencies, or database schema (the foundation everything else is built on) can't be done by giving your agent a one-liner prompt. These decisions deserve more time than adding a feature.

7- Complex agent setups suck

Fancy agents with multiple roles and a ton of .md files? They don't work well in practice. Simplicity always wins.

8- Agent experience is a priority

Treat the agent workflow itself as something worth investing in. Monitor how the agent is using your codebase, and optimize the process iteratively over time.

9- Own your prompts, own your workflow

I don't like to copy-paste some skill/command or install a plugin and use it as a black box. I always change and modify it based on my workflow and things I notice while building.

10- Process alignment becomes critical in teams

Doing this as part of a team is harder than doing it yourself. It becomes critical that all members follow the same process and share updates to it together.

11- AI code is not optimized by default

AI-generated code is not optimized for security, performance, or scalability by default. You have to explicitly ask for it and verify it yourself.

12- Check git diff for critical logic

When you can't afford to make a mistake, or have hard-to-test apps with long test cycles, review the git diff. For example, the agent might use created_at as a fallback for birth_date. You won't catch that just by testing whether it works or not.

13- You don't need an LLM call to calculate 1+1

It amazes me how people default to LLM calls when they could use a simple, free, and deterministic function. But then we're not "AI-driven," right?

EDIT: Your comments are great; they're inspiring which points I'll expand on next. I'll be sharing more of these insights on X as I go.
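The created_at fallback in point 12 is the kind of bug that passes every smoke test and only surfaces in a diff review. A minimal Python sketch of that failure mode (the function and field names here are hypothetical, not from the original post):

```python
from datetime import date

# Hypothetical sketch: the agent silently falls back to the account
# creation date when the birth date is missing, so the function "works"
# in every quick test even though the result is semantically wrong.
def age_in_years(birth_date, created_at, today):
    dob = birth_date or created_at  # looks harmless in a diff; it isn't
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

# A user with no birth_date on file gets their *account* age, not their age:
print(age_in_years(None, date(2024, 1, 15), date(2026, 1, 20)))  # prints 2
```

Running the code never raises an error and returns plausible numbers whenever birth_date is populated; only the `birth_date or created_at` line in the diff reveals the problem.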
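Point 13 in code: the tiered-discount rule below is a hypothetical example (not from the original post) of the kind of logic people route through an LLM call that a plain function handles for free, instantly, and deterministically:

```python
# Hypothetical sketch for point 13: a fixed business rule needs no LLM.
def classify_discount(order_total):
    """Tiered discount lookup: free, deterministic, and unit-testable,
    where an LLM call would be slow, paid, and non-deterministic."""
    if order_total >= 100:
        return 0.10
    if order_total >= 50:
        return 0.05
    return 0.0

print(classify_discount(120))  # prints 0.1
```

The same reasoning applies to arithmetic, date math, format validation, and lookups: if the mapping from input to output is fixed, a function is strictly better than a model call.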

Originally posted by u/helk1d on r/ClaudeCode

  • theunknownmuncher@lemmy.world

    Yeah, back in the real world, LLMs are not at all capable of 100% vibecoding. Unfortunately, the problems are fundamental to the nature of LLMs and cannot be solved by a “better” LLM. Newer models still have the exact same fatal flaws as older models.

    It has been over 3 years with the tech, and we’re all still waiting on the AI-written software revolution, with massive amounts of new, niche open-source projects flooding the software landscape. Its absence is proof that the author is just LARPing. Yet for those 3 years, people like the author have been making ridiculous claims like this with absolutely nothing real to back them up.

    In fact, companies have been reporting that AI use has caused a measurable decline in productivity, and LLM API usage stats have reflected this, as many companies have already begun to pull the plug. In the open-source space, LLMs have been a problem rather than a revolution, hindering projects with bug-filled PRs and bogus vulnerability reports.

    The post is a fantasy and is the kind of delusional posing that belongs on LinkedIn.