Question for the founders and long-time devs here, not a rant. Productivity gains from agentic use make sense for real businesses with real budgets and real headcount; the ROI is obvious there. For the rest of us building solo or pre-revenue, the math gets murkier, and I'm trying to think it through for myself.

Subscriptions are a good deal, no complaints for my use case. Token spend bundled into a flat sub can be way cheaper than third-party options, and that lowers the barrier to entry a lot for building a product.

But here's where it gets weird for me: limits. I do use the API, but I keep it tight, a handful of `-p` automations that run as crons and spit out summaries for me to execute manually. Stuff like a daily digest of overnight errors, or a weekly dependency review I read and action myself. I'm not handing the agents full control; that's a deliberate call to dodge runaway costs. But sub limits break automations halfway through the month, and an automation that quietly dies on the 15th isn't really an automation. So I'm stuck between keeping API use deliberately small and dealing with unpredictable sub throttling. (And I know about the SDK change coming in June that makes it even murkier, imo.)

The usual answer is "just host your own local model." Sure, easy when you have the hardware. But ~$10k to full-ass it is a hard sell for a pre-revenue startup, and you can still lose intelligence on harder tasks. The ROI doesn't math out for me yet.

So my actual question: is the agentic push built around our ROI, or around quicker revenue and adoption numbers to keep funding the compute buildouts? Not saying those cancel out, but they aren't the same thing either, and the difference shows up in stuff like where the limits get drawn.

Curious how everyone else is handling the limits-vs-API tradeoff for long-running agentic stuff. What's actually working for you?
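For what it's worth, the "quietly dies on the 15th" part is the piece I've at least been able to patch around: wrap the cron'd command so a limit-induced failure gets logged loudly instead of vanishing. Minimal sketch below; the `claude -p` usage line and the `api_fallback.sh` script are hypothetical placeholders for whatever your cron actually runs, not real config.

```shell
#!/bin/sh
# Wrapper for a cron'd digest job. Runs whatever command it is given;
# if that command fails (e.g. monthly sub limit hit), it writes a loud,
# timestamped line to stderr so cron's mail/log surfaces the failure,
# instead of the automation dying silently mid-month.
#
# Hypothetical usage from crontab:
#   run_digest claude -p "summarize overnight errors"

run_digest() {
  if "$@"; then
    return 0
  fi
  # Surface the failure where cron will mail or log it.
  echo "digest failed at $(date -u +%Y-%m-%dT%H:%MZ): $*" >&2
  # ./api_fallback.sh "$@"   # hypothetical pay-per-token fallback path
  return 1
}
```

It doesn't solve the cost question, but at least the choice between eating the throttle and paying API rates becomes a visible decision on the 15th rather than a silent gap I discover at month end.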
Originally posted by u/Jmaack23 on r/ClaudeCode
