I prompted Claude Code to classify a dataset of 200 Hacker News “Who’s Hiring” posts by primary engineering role. Doing this in a single prompt doesn’t scale with Claude Code out of the box, but I got around it with a plugin that processed each row in parallel against a fixed schema. Here’s the prompt I used:

Claude called everyrow_agent with a response schema (category as an enum, reasoning as a string), submitted all 200 rows for parallel processing, polled for progress, and successfully classified all 200. The full run cost $1.53 and took 59 seconds. What made it useful beyond the cost/time vs. just prompting Claude directly was the structured output constraint: every row got exactly one of the 9 enum values, and there wasn’t any post-processing. The Python SDK code is here: everyrow.io/docs/classify-dataframe-rows-llm

Before using the plugin, I tried chunking the CSV and having Claude write a loop, but neither approach handled batch classification cleanly inside Claude Code without burning through the context window. Would love input on what anyone else is using when a CSV is too big for a single prompt.
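The structured-output constraint is the part that removes post-processing: if the schema forces `category` into an enum, every row is guaranteed to land in exactly one bucket. A minimal sketch of that idea in plain Python (the 9 role names and the `validate` helper below are hypothetical stand-ins, not the everyrow SDK; see the docs link for the real API):

```python
from dataclasses import dataclass
from enum import Enum


# Hypothetical role categories -- the post doesn't list the actual 9 enum values.
class Role(str, Enum):
    BACKEND = "backend"
    FRONTEND = "frontend"
    FULLSTACK = "fullstack"
    MOBILE = "mobile"
    DEVOPS = "devops"
    DATA = "data"
    ML = "ml"
    SECURITY = "security"
    OTHER = "other"


@dataclass
class RowResult:
    category: Role  # constrained to exactly one of the 9 enum values
    reasoning: str


def validate(raw: dict) -> RowResult:
    """Coerce one row of model output into the schema.

    Role(...) raises ValueError on any category outside the enum,
    so off-schema answers fail loudly instead of needing cleanup.
    """
    return RowResult(category=Role(raw["category"]), reasoning=raw["reasoning"])


result = validate(
    {"category": "backend", "reasoning": "Post asks for Django/Postgres experience."}
)
print(result.category.value)  # -> backend
```

The same check applied across all 200 rows is what makes the output safe to feed straight into a dataframe column.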
Originally posted by u/MathematicianBig2071 on r/ClaudeCode
