I am on the $200/month Claude Max plan, and I think my account’s limits were reduced significantly in the last month. This is not based on “it feels worse”: I have local session data from Claude Code, and I can reproduce the calculation from the JSONL logs. I am also not claiming this is final proof of a global change for every Max user. What I am saying is narrower:

- my account data shows a very large change
- one part of that change appears exactly at a weekly reset boundary
- the result is large enough that I think other Max users should check their own logs

## What data I used

Claude Code stores session transcripts as local `.jsonl` files. In my case the corpus is:

- 1,919 JSONL session files
- 30,156 deduplicated assistant messages with usage
- date range for assistant usage history: 2026-02-08 through 2026-04-11

Each JSONL file is an event stream. The assistant messages include usage fields, which let me reconstruct actual token burn from the transcript itself.

Separately, I added a small statusline script that writes one extra JSON record into the same session JSONL whenever the Claude Code statusline updates. I call those records `rl-snapshot`. Each `rl-snapshot` contains:

- `ts`: timestamp
- `sid`: session id
- `rl5`: `rate_limits.five_hour.used_percentage`
- `rl7`: `rate_limits.seven_day.used_percentage`
- `eff_k`: my local effective-token estimate at that moment

The important part is that `rl5` and `rl7` are not my guess. Those percentages are passed into the Claude Code statusline and originate from Anthropic’s backend.

For these rate-limit snapshots I currently have:

- 1,593 total snapshots
- snapshot date range: 2026-04-08 20:57 UTC through 2026-04-11 01:36 UTC
- high-precision subset used for weekly cap estimation: 1,539 snapshots

Why only a subset? Because when `rl7` is under 10%, the backend percentage is only integer-rounded, so the noise becomes too high. For cap estimation I mainly use the snapshots where `rl7 >= 10%`.
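For anyone who wants to reproduce the snapshot capture, here is a rough sketch of the statusline hook. The exact stdin payload schema depends on your Claude Code version, so treat the field names (`session_id`, `transcript_path`, and the shape of `rate_limits`) as assumptions to verify against your own setup; my `eff_k` computation is omitted here.

```python
#!/usr/bin/env python3
# Sketch of a statusline hook that appends rl-snapshot records.
# ASSUMPTION: Claude Code pipes a JSON payload to the statusline command on
# stdin, including a session id, a transcript path, and the backend
# rate_limits object described above. These keys are my best guess, not a spec.
import json
import sys
import time


def snapshot_from_payload(payload, ts=None):
    """Build one rl-snapshot record from a statusline payload dict."""
    rl = payload.get("rate_limits", {})
    return {
        "type": "rl-snapshot",
        "ts": ts or time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "sid": payload.get("session_id"),
        "rl5": rl.get("five_hour", {}).get("used_percentage"),
        "rl7": rl.get("seven_day", {}).get("used_percentage"),
    }


if __name__ == "__main__":
    payload = json.load(sys.stdin)
    snap = snapshot_from_payload(payload)
    path = payload.get("transcript_path")  # assumed key; adjust if different
    if path:
        with open(path, "a") as f:
            f.write(json.dumps(snap) + "\n")
    # The statusline command still has to print something visible.
    print(f"5h {snap['rl5']}% | 7d {snap['rl7']}%")
```

Appending into the session transcript itself (rather than a separate file) is what keeps each snapshot automatically aligned with the surrounding usage records.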
## How I calculated the weekly cap

For each snapshot, I compute how many tokens had already been burned in the current Friday-to-Friday billing week, then compare that burn to the backend’s own `seven_day.used_percentage`. The core formula is:

`weekly_cap = current_week_burn / (rl7 / 100)`

I calculate burn from the actual usage records in the transcript, not from manual estimates. I track several units, but the most useful one for comparison has been my “effective tokens” metric:

`effective = input * 1.0 + output * 5.0 + cache_creation_5m * 1.25 + cache_creation_1h * 2.0 + cache_read * 0.1`

These weights are not arbitrary; they are just API-price normalization. I treat input token cost as the base unit (1.0), then convert output, cache writes, and cache reads into the same unit using their relative API pricing. So “effective tokens” here really means something like input-cost-equivalent tokens. For example, if output tokens cost 5x as much as input tokens, then output gets weight 5.0. The same idea applies to cache creation and cache reads.

So this metric is meant to approximate how much paid API usage a session consumed, expressed in input-token-cost units. That matters because it measures the budget in cost-equivalent terms, not just raw token count. Raw tokens are also included separately in the analysis, but the normalized metric is often more informative because different token types are not priced the same.

## Why I think the April 10 reset is the strongest evidence

The cleanest signal is not the March data. The cleanest signal is the weekly reset on Friday, 2026-04-10 at 17:00 UTC.

Before that reset, I had 1,079 snapshots from the previous week where the implied weekly cap clusters around:

- ~740M raw tokens
- ~135M effective tokens

After that reset, in the first ~8 hours of the new week, I had 460 snapshots.
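The two formulas above can be sketched directly in code. The weights are the ones I stated; the usage key names are my labels for the token categories in the JSONL `usage` records, so adjust them to whatever your logs actually call those fields.

```python
# API-price-normalized "effective tokens" and the implied weekly cap.
# The key names below are illustrative labels, not a guaranteed schema.
EFFECTIVE_WEIGHTS = {
    "input_tokens": 1.0,
    "output_tokens": 5.0,
    "cache_creation_5m_tokens": 1.25,
    "cache_creation_1h_tokens": 2.0,
    "cache_read_tokens": 0.1,
}


def effective_tokens(usage):
    """Convert one usage record into input-cost-equivalent tokens."""
    return sum(w * usage.get(key, 0) for key, w in EFFECTIVE_WEIGHTS.items())


def implied_weekly_cap(current_week_burn, rl7):
    """weekly_cap = current_week_burn / (rl7 / 100), where rl7 is a percentage."""
    return current_week_burn / (rl7 / 100.0)
```

As a worked example: a snapshot showing 27M effective tokens burned at `rl7 = 20` implies a weekly cap of about 135M effective, which is exactly the kind of cluster I describe below.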
Those post-reset snapshots are noisier because they sit at lower percentages, but the higher-confidence later points imply roughly:

- ~80M raw tokens
- ~22M effective tokens

So the weekly cap on my account appears to move from about 135M effective to about 22M effective right across the reset boundary. That is roughly a 6x reduction.

What makes this interesting is the shape of the change:

- same account
- same Max plan
- same Claude Code CLI
- same general workflow
- same day
- no gradual decline
- just a normal weekly reset boundary, then a much smaller inferred cap

That is why I think this part is real signal and not just vibes.

## Why I think the 5-hour limit was also much higher in March

I started writing the `rl-snapshot` statusline records only on April 8, so I do not have backend percentage snapshots for March. But I do have the full assistant usage history going back to February 8, and one thing stands out:

- on 2026-03-08 09:42 UTC, my largest rolling 5-hour window reaches 35.91M effective tokens
- immediately after that, there is a 262-minute gap with zero assistant messages
- then activity resumes

That pattern looks exactly like hitting a 5-hour rolling limit and waiting for the window to clear. This does not prove the March 5-hour cap was exactly 35.91M. What it does show is that the cap was at least that high, because I actually consumed that much in a 5-hour window before getting cut off.

Then compare that to a current measurement from 2026-04-11 01:29 UTC:

- backend `rl5` = 96%
- 5-hour burn at that point = 4.57M effective tokens
- implied 5-hour cap = about 4.76M effective tokens

So on the 5-hour axis, the data looks like:

- March 8: at least 35.91M effective
- April 11: about 4.76M effective

That is at least a 7.5x reduction on the 5-hour axis.

## Where the “~10x” number comes from

The ~10x claim is mainly about the weekly axis, not the 5-hour axis. Here is the chain:

In mid-to-late March, my weekly burn repeatedly reached about 120M effective tokens.
At that time I was not obviously running into the weekly ceiling, and my recollection is that I was nowhere near 100%. If a week with roughly 120M effective burn was still only around half of the weekly budget, that implies a March weekly cap on the order of ~240M effective or higher.

Then, during the Apr 3 to Apr 10 week, I finally have backend `rl7` snapshots and can measure the cap much more directly. That cluster comes out around ~135M effective.

Then, exactly at the Apr 10 17:00 UTC reset, the implied weekly cap drops again to about ~22M effective.

So the picture is not:

- March directly measured at exactly X
- then April directly measured at exactly Y

The picture is:

- March weekly level: roughly ~240M+ effective, based on actual weekly burn that did not appear to exhaust the weekly budget
- Apr 3-10 weekly level: ~135M effective, from backend-percentage-based measurement
- Apr 10+ weekly level: ~22M effective, from backend-percentage-based measurement

If you compare ~240M+ in March to ~22M now, you get roughly a 10-11x cumulative reduction on the weekly axis.

That March weekly number is an inference, not a hard snapshot-derived measurement, and I want to be explicit about that. But it is not coming from nowhere either. It is based on repeated observed weekly burn totals from the session corpus:

- 2026-03-06 to 2026-03-13: 120.83M effective
- 2026-03-20 to 2026-03-27: 121.59M effective
- 2026-03-27 to 2026-04-03: 115.19M effective

Then compare that with the directly measured week:

- 2026-04-03 to 2026-04-10: implied weekly cap about 135M effective

And then the post-reset week:

- 2026-04-10 onward: implied weekly cap about 22M effective

So when I say “about 10x”, I do not mean I already have one perfect March backend snapshot that says exactly 240M.
I mean:

- March usage patterns suggest a weekly budget on the order of ~240M+
- early April directly measures at about 135M
- after the Apr 10 reset it directly measures around 22M

That is why I think there may have been two separate reductions:

- an earlier March to early-April reduction
- then a second, much sharper reduction exactly at the Apr 10 weekly reset

## What I think is strong vs what is still provisional

Strong:

- the April 10 weekly reset looks like a hard step change in the inferred weekly cap
- the backend rate-limit percentages are being recorded directly into my session logs
- the burn values come from the transcript usage records themselves

More provisional:

- the exact timing and size of the earlier March-to-early-April reduction
- whether the change is account-specific, cohort-specific, or global
- the exact previous weekly cap in March, because for that period I have burn history but not the same backend percentage snapshots

## Why I am posting this now

I am continuing to collect data, and I expect the next few days of logs to make the picture clearer. But the April 10 step change is sharp enough that I think it is worth posting now, because if other Max users have similar JSONL histories, this should be easy to compare.

## My question

Has anyone else on Claude Max seen the same thing? If you have local Claude Code session logs, especially if you capture the statusline rate-limit fields, I would really like to compare:

- 5-hour implied cap
- 7-day implied cap
- before vs after the 2026-04-10 17:00 UTC weekly reset

If Anthropic changed Max limits intentionally, I would like to know what changed and where it was announced. If I am misreading the data, I am happy to be corrected, but please engage with the methodology rather than just saying “limits are dynamic”.

please also support on X/twitter https://x.com/artem_mukhin_dx/status/2042798725731340658
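If you want to run the same comparison on your own logs, here is a rough sketch of the rolling 5-hour window scan I described. The `timestamp` and `message.usage` field paths, and the simplified token weighting, are assumptions you will need to adapt to your Claude Code version (and you should dedupe assistant messages if your logs contain retried events, as mine do).

```python
# Sketch: find the peak token burn in any rolling 5-hour window across
# Claude Code session JSONL files. Field paths are assumed, not a spec.
import json
from datetime import datetime, timedelta
from pathlib import Path


def max_5h_window(events, window=timedelta(hours=5)):
    """events: time-sorted list of (datetime, tokens). Return (peak, window_start)."""
    best, best_start = 0, None
    start, running = 0, 0
    for ts, tok in events:
        running += tok
        # Evict everything older than the trailing 5-hour window.
        while events[start][0] < ts - window:
            running -= events[start][1]
            start += 1
        if running > best:
            best, best_start = running, events[start][0]
    return best, best_start


def load_events(log_dir):
    """Collect (timestamp, tokens) from assistant messages in all session files."""
    events = []
    for path in Path(log_dir).glob("**/*.jsonl"):
        for line in path.open():
            try:
                ev = json.loads(line)
            except json.JSONDecodeError:
                continue
            usage = (ev.get("message") or {}).get("usage")
            if ev.get("type") != "assistant" or not usage:
                continue
            ts = datetime.fromisoformat(ev["timestamp"].replace("Z", "+00:00"))
            # Swap in your own effective-token weighting here.
            tok = usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
            events.append((ts, tok))
    events.sort()
    return events
```

Running `max_5h_window(load_events("~/.claude/projects"))`-style scans per week (with your own weighting) is how you would reproduce a peak like the 35.91M one above; the path shown is illustrative.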
Originally posted by u/xeviltimx on r/ClaudeCode

That is some wall you got there.