Original Reddit post

I’m the same person who posted the original PSA about two cache bugs this week. Since then I’ve kept digging: six days total (since March 26), a MITM proxy, Ghidra, LD_PRELOAD hooks, custom ptrace debuggers, 5,353 captured API requests, 12 npm versions compared, and the leaked TypeScript source verified. The full writeup is on Medium (link in the comments).

The best thing that came out of the original posts wasn’t my findings; it was that people started investigating on their own. The early discovery that pinning to 2.1.68 avoids the cch=00000 sentinel and the resume regression meant everyone could safely experiment on older versions without burning their quota. Community patches from VictorSun92, lixiangwuxian, whiletrue0x, RebelSyntax, FlorianBruniaux and others followed fast in the relevant GitHub issues. Here’s a summary of everything found so far.

The bugs

  1. Resume cache regression (since v2.1.69, UNFIXED in 2.1.89). When you resume a session, system-reminder blocks (deferred tools list, MCP instructions, skills) get relocated from messages[0] to messages[N]. Fresh session: msgs[0] = 13.4KB. Resume: msgs[0] = 352B. The cache prefix breaks. It’s a one-time cost of ~$0.15 per resume, but for --print --resume bots every call is a resume. GitHub issue #34629 was closed as “COMPLETED” on April 1. I tested on 2.1.89 the same day and the bug is still present: same msgs[0] mismatch, same cache miss.
  2. Dynamic tool descriptions (v2.1.36–2.1.87, FIXED in 2.1.89). Tool descriptions were rebuilt on every request. WebSearch embeds “The current month is April 2026”, which changes monthly. AgentTool embedded a dynamic agent list that Anthropic’s own comment says caused “~10.2% of fleet cache_creation tokens.” Fixed in 2.1.89 via toolSchemaCache. (I initially reported the fix as missing because I searched for the literal string in the minified code; minification renames everything. Lesson learned.)
  3. Fire-and-forget token doubler (DEFAULT ON). extractMemories runs after every turn, sending your FULL conversation to Opus as a separate API call with different tools, which means a separate cache chain. A 20-turn session at 650K context burns ~26M tokens instead of ~13M. The cost doubles, and it’s on by default. Disable it with: /config set autoMemoryEnabled false
  4. Native binary sentinel replacement. The standalone claude binary (a 228MB ELF) has ~100 lines of Zig injected into the HTTP header builder that replaces cch=00000 in the request body with a hash. It doesn’t affect caching directly (the billing header has cacheScope: null), but if the sentinel leaks into your messages (by reading source files or discussing billing), the wrong occurrence gets replaced. Only the standalone binary is affected; npx/bun builds are clean.

Where the real problem probably is

After eliminating every client-side vector I could find (114 confirmed findings, 6 dead ends), the honest conclusion: I didn’t find what causes sustained cache drain. The resume bug is one-time. Tool descriptions are fixed in 2.1.89. The token doubler is disableable. Yet community reports describe cache_read flatlined at ~11K for turn after turn with no recovery. I observed a cache population race condition when spawning 4 parallel agents: 1 out of 4 got a partial cache miss. Anthropic’s own code comments say “~90% of breaks when all client-side flags false + gap < TTL = server-side routing/eviction.”

My hypothesis: each session generates up to 4 concurrent cache chains per turn (main + extractMemories + findRelevantMemories + promptSuggestion). During peak hours the server can’t maintain all of them. Disabling auto-memory reduces the number of chained requests.

What to do

```json
// ~/.claude/settings.json
{ "env": { "ENABLE_TOOL_SEARCH": "false" } }
```

- /config set autoMemoryEnabled false
- Bots/CI: pin to 2.1.68 (no resume regression)
- Interactive: use 2.1.89 (tool schema cache)
- Don’t mix --print and interactive on the same session ID

Bonus: the swear words

Kolkov’s article described “regex-based sentiment detection” with a profanity word list. I traced it to the source. It’s a blocklist of 30 words (fuck, shit, cunt, etc.) in channelPermissions.ts used to filter randomly generated 5-letter IDs for permission prompts. If the random ID generator produces fuckm, it re-hashes with a salt.
The code comment: “5 random letters can spell things… covers the send-to-your-boss-by-accident tier.” NOT sentiment detection. Just making sure your permission prompt doesn’t accidentally say fuckm.

There IS actual frustration detection (useFrustrationDetection), but it’s gated behind process.env.USER_TYPE === 'ant', so it’s dead code in external builds. And there’s a keyword telemetry regex (/\b(wtf|shit|horrible|awful)\b/) that fires a logEvent: pure analytics, zero impact on behavior or cache.

Also found:

- KAIROS: unreleased autonomous daemon mode with /dream, /loop, cron scheduling, GitHub webhooks
- Buddy system: collectible companions with rarities (common → legendary), species (duck, penguin), hats, 514 lines of ASCII sprites
- Undercover mode: instructions to never mention internal codenames (Capybara, Tengu) when contributing to external repos. “NO force-OFF”
- Anti-distillation: fake tool injection to poison MITM training-data captures
- Autocompact death spiral: 1,279 sessions with 50+ consecutive failures, “wasting ~250K API calls/day globally” (from a code comment)
- Deep links: claude-cli:// protocol handler with homoglyph warnings and command injection prevention

The full article, with all sources, methodology, and 19 chapters of detail, is on Medium. Research by me. Co-written with Claude, obviously.

PS. My research is done. If you want, feel free to continue.
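For anyone who wants to pick this up: the blocklisted-ID mechanism from the bonus section can be sketched roughly like this. Every identifier below is my reconstruction, not the actual channelPermissions.ts code; the real blocklist has ~30 entries and the real derivation may differ.

```typescript
import { createHash } from "crypto";

// Abbreviated stand-in for the ~30-word blocklist described in the post.
const BLOCKLIST = new Set(["fuck", "shit", "cunt"]);

const LETTERS = "abcdefghijklmnopqrstuvwxyz";

// Derive a 5-letter ID from a hash of the seed plus an optional salt.
function idFromHash(seed: string, salt = ""): string {
  const digest = createHash("sha256").update(seed + salt).digest();
  let id = "";
  for (let i = 0; i < 5; i++) id += LETTERS[digest[i] % 26];
  return id;
}

// Re-hash with an incrementing salt until the ID contains no blocklisted word,
// so a permission prompt never accidentally says "fuckm".
function safePermissionId(seed: string): string {
  let salt = 0;
  let id = idFromHash(seed, String(salt));
  while ([...BLOCKLIST].some((w) => id.includes(w))) {
    salt += 1;
    id = idFromHash(seed, String(salt));
  }
  return id;
}
```

The design choice worth noting is that a hit triggers a deterministic re-hash rather than a rejection-sampling loop over fresh randomness, so the same seed always yields the same clean ID.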

Originally posted by u/skibidi-toaleta-2137 on r/ClaudeCode