I gave Claude Code a large fuzzy matching task ( https://everyrow.io/docs/case-studies/match-clinical-trials-to-papers ), and it independently designed a TF-IDF pre-filtering step, spun up 8 parallel subagents, and used regex for direct ID matching. But it used exactly 8 subagents whether the right-side dataset had 200 or 700 rows. That seems to be a natural consequence of how coding agents plan: they estimate a reasonable level of parallelism up front and stick with it, so as the dataset grows, each agent's workload increases while the degree of parallelism stays constant. I tried prompting it to use more subagents and it still capped at 8. I ended up solving it with an MCP tool that scales agent count dynamically, but I'm curious if anyone's found a prompting approach that works.
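For anyone wondering what "scales agent count dynamically" means in practice, here's a minimal sketch of the idea. The function name and thresholds are made up for illustration; they're not from the MCP tool mentioned above:

```python
import math

def agent_count(n_rows, rows_per_agent=50, max_agents=32):
    """Derive subagent count from dataset size instead of fixing it.

    rows_per_agent and max_agents are illustrative defaults, not values
    from the actual tool. The cap keeps very large datasets from
    spawning an unbounded number of agents.
    """
    return max(1, min(max_agents, math.ceil(n_rows / rows_per_agent)))

print(agent_count(200))  # 4 agents for the smaller dataset
print(agent_count(700))  # 14 agents for the larger one
```

The point is that the ratio of work per agent stays roughly constant as the dataset grows, which is the opposite of what a fixed-at-8 plan gives you.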
Originally posted by u/ddp26 on r/ClaudeCode
