Seriously, the usage limit updates (for Claude and Codex) are a separate thing and were (unfortunately) kind of expected. Mitigation is easy, if painful: use the API or add more subscriptions.
But what hits me harder is the noticeably degraded quality of both Opus 4.6 (the 1M as well as the 200k "flavor") and GPT-5.4.

The evidence is pretty obvious: I have a skill for PR reviews, and until last week I never had any issues with it.
I think it’s pretty clear:
Use the repo's local helper as the primary tool for all PR comment collection: initial triage, mid-loop checks, and post-push polling alike. For this repo:

```bash
tsx scripts/pr-comments.ts --new <pr-url>
```

This script already aggregates inline review comments, PR review bodies, and thread metadata into a single deduped output. Do NOT bypass it with raw `gh api` calls unless you are debugging the script itself or fetching data it does not cover.

Important:

- valuable nitpicks and outside-diff findings may exist only in PR review bodies; the local helper should cover these, but verify if unsure
- prefer merging sources into one deduped task list instead of trusting a single snapshot or endpoint
- reduce noise deliberately: by default, exclude walkthrough blobs, trigger acknowledgements, resolved/outdated threads, and "Addressed in commit …" comments unless you are auditing stale review state
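The actual script isn't shown in the post, but for illustration, here is a minimal TypeScript sketch of the merge-and-dedup step such a helper might implement. All type and function names here are hypothetical, not the real `scripts/pr-comments.ts`:

```typescript
// Hypothetical shape of a collected comment; real data would come from
// the GitHub API (inline review comments, review bodies, thread metadata).
type ReviewComment = {
  id: number;        // stable comment ID used as the dedup key
  body: string;
  resolved?: boolean;
  outdated?: boolean;
};

// Merge multiple sources into one deduped list, dropping noise by default:
// resolved/outdated threads and "Addressed in commit …" acknowledgements.
function mergeDeduped(...sources: ReviewComment[][]): ReviewComment[] {
  const seen = new Map<number, ReviewComment>();
  for (const source of sources) {
    for (const c of source) {
      if (c.resolved || c.outdated) continue;
      if (/^addressed in commit/i.test(c.body)) continue;
      if (!seen.has(c.id)) seen.set(c.id, c); // first occurrence wins
    }
  }
  return [...seen.values()];
}
```

The point of routing everything through one function like this is exactly what the instructions ask for: a single deduped task list instead of trusting any one endpoint's snapshot.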
And, to be honest, even if the instructions could be optimized, optimization wasn't required until last week. It just worked. But recently, both LLMs started "bypassing" the script by using the gh CLI directly.
That's just one example, but this one is, IMHO, "deterministic". It feels like OAI and Anthropic agreed: "Yeah, let's lower usage limits and cap our compute resources / reasoning effort." And it's REALLY frustrating, even scary, because right now it feels like even switching to the API wouldn't resolve the quality issue.

I can only hope this is the "usual cycle" we've seen before, right ahead of new generations dropping.
Originally posted by u/Firm_Meeting6350 on r/ClaudeCode
