Original Reddit post

At this point, I think it’s obvious that Claude Code users are not all receiving the same output quality, despite paying the same subscription price. I’m not talking about rate limits or session length. I’m talking about the actual intelligence and usefulness of the responses. Some users seem to get highly competent outputs that solve problems correctly on the first attempt, while others get noticeably worse results that require multiple follow-ups, corrections, and retries just to produce working code.

When two customers are paying the same amount but effectively receiving different levels of model performance, that’s a legitimate concern. If Anthropic is dynamically routing users to different tiers of inference quality, compute allocation, context processing, or internal prioritization, there should be transparency around that. Right now it feels like some users are effectively getting a “smarter” Claude than others while paying the exact same price.

There is certainly some behind-the-scenes grouping going on at Anthropic, where the users they deem superior get the better models while others are throttled. What can be done to put a stop to this? I’d imagine it’s somewhat unlawful.

Originally posted by u/LiquidVolatility on r/ClaudeCode