Working on a complex front-end task, I fed Claude ~8200 chars of DOM markup for analysis. Compaction fired, and the summary compressed it down to a one-line mention. Claude had no idea anything was missing and kept working on bad assumptions.

The root cause: compaction summaries carry no back-reference to the transcript they came from. Once content is summarized, it's unreachable from the conversation, even though the full transcript still exists on disk.

I filed a feature request proposing indexed transcript references in compaction summaries. Instead of losing context permanently, the summary would include pointers like [transcript:lines 847-1023] that Claude can read on demand. Zero standing token cost, surgical recovery only when it's needed, and no MCP servers or embedding databases required. 12 thumbs-up on GitHub so far. If you've hit this problem, go upvote: https://github.com/anthropics/claude-code/issues/26771

Curious what workarounds others have found, or if you've just been eating the context loss.
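To make the proposal concrete, here's a minimal sketch of how resolving such pointers could work. The marker format and function name are my own illustration (the feature request only describes the idea, not an implementation): scan a compaction summary for markers, then read exactly those line ranges back from the on-disk transcript.

```python
import re
from pathlib import Path

# Hypothetical marker format from the proposal, e.g. "[transcript:lines 847-1023]".
MARKER = re.compile(r"\[transcript:lines (\d+)-(\d+)\]")

def resolve_references(summary: str, transcript_path: Path) -> dict[str, str]:
    """Map each marker in a compaction summary to the original transcript lines.

    The pointers themselves add almost nothing to the summary; the full
    text is only read back from disk when a marker is actually followed.
    """
    lines = transcript_path.read_text().splitlines()
    resolved = {}
    for match in MARKER.finditer(summary):
        start, end = int(match.group(1)), int(match.group(2))
        # Markers use 1-indexed, inclusive line ranges into the transcript file.
        resolved[match.group(0)] = "\n".join(lines[start - 1:end])
    return resolved
```

The point of the design is that recovery is surgical: nothing is kept in context "just in case", and a stale or bogus marker degrades to an empty lookup rather than silently wrong context.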
Originally posted by u/Short-Sector4774 on r/ClaudeCode
