Does anyone have a suggestion for lowering the impact of tasks that require a large number of web searches? I was just building a scraper suite for all the local representatives here in my state, and it blew through my 5-hour usage cap in about 45 minutes (on the Max 5x plan). I've also noticed this happening when I run topic research tasks. Even with Python scripts doing the actual fetching and cleaning, and Haiku subagents doing the initial classifying before feeding it to Sonnet/Opus for analysis, it uses up SO much context. Is there a better way to do this that doesn't demolish my usage cap?
Originally posted by u/TimSimpson on r/ClaudeCode
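For context, here's a minimal sketch of the kind of "fetch and clean outside the model" pipeline described above, so the agent only ever reads compact excerpts instead of raw pages. The URLs, file name, and character cap are placeholders, not the poster's actual setup:

```python
"""
Sketch: do the heavy fetching/cleaning in plain Python so the model never
sees raw HTML. Only the compact cleaned excerpts get handed to a
Haiku/Sonnet pass later. URLs and field names here are placeholders.
"""
import json

import requests
from bs4 import BeautifulSoup

# Placeholder list -- swap in the real representative pages being scraped.
URLS = [
    "https://example.gov/rep/alice-smith",
    "https://example.gov/rep/bob-jones",
]

MAX_CHARS = 2000  # hard cap on how much text per page ever reaches the model


def fetch_clean(url: str) -> str:
    """Fetch a page and reduce it to plain text, dropping nav/script noise."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer"]):
        tag.decompose()
    text = " ".join(soup.get_text(separator=" ").split())
    return text[:MAX_CHARS]


def main() -> None:
    # Write one compact JSON record per page; the agent only reads this file,
    # so its context cost is bounded by MAX_CHARS * len(URLS), not page size.
    with open("cleaned_pages.jsonl", "w", encoding="utf-8") as out:
        for url in URLS:
            try:
                excerpt = fetch_clean(url)
            except requests.RequestException as exc:
                excerpt = f"FETCH_ERROR: {exc}"
            out.write(json.dumps({"url": url, "excerpt": excerpt}) + "\n")


if __name__ == "__main__":
    main()
```

The idea is that the subagents read the JSONL in batches rather than issuing their own web searches, so context spend is bounded by the excerpt cap instead of by page size or search volume.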
