Original Reddit post

I’m trying to get a reality check from other people building with Claude. I pulled my usage stats recently and the totals surprised me, so I’m curious how this compares to others who use Claude heavily for development, agents, or research workflows.

All-time usage:

- Total tokens: ~9.3 billion
- Total cost: ~$6,859
- Input tokens: ~513k
- Output tokens: ~3.39M+
- Cache create: ~383M+
- Cache read: ~8.9B+

By month:

- Feb 2026: 525M tokens, $312
- Mar 2026: 8.77B tokens, $6,546

Models used: mostly Claude Opus 4.6, with some Sonnet 4.6 and Haiku 4.5.

A lot of this came from running multiple long-running projects and agent systems (coding agents, research pipelines, document analysis, trading experiments, etc.), which generated huge cache reads over time.

I’m genuinely curious:

- Are there other individual users hitting multi-billion token usage like this?
- How common is it for a single user to burn ~$5k–$10k+ in Claude compute?
- Are there “power users” here running similar agent workflows?

Would love to hear from people doing heavy Claude builds or large-scale experiments. Trying to figure out whether this is normal for advanced users or if I’ve wandered into “inference whale” territory.
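For anyone double-checking the numbers: the per-category counts in the post do add up to the reported ~9.3B total, and they show cache reads dominating. A quick sketch (counts taken directly from the post; no pricing is assumed, since per-token rates vary by model and cache tier):

```python
# Per-category token counts as reported in the post.
counts = {
    "input": 513_000,             # ~513k input tokens
    "output": 3_390_000,          # ~3.39M output tokens
    "cache_create": 383_000_000,  # ~383M cache-create tokens
    "cache_read": 8_900_000_000,  # ~8.9B cache-read tokens
}

total = sum(counts.values())
print(f"total tokens: {total / 1e9:.2f}B")  # ≈ 9.29B, matching the ~9.3B reported

share = counts["cache_read"] / total
print(f"cache reads: {share:.1%} of all tokens")  # ≈ 95.8%
```

So roughly 96% of the volume is cache reads, which is why the token total looks enormous relative to the actual input/output traffic.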

Originally posted by u/OGMYT on r/ArtificialInteligence