Original Reddit post

I need to get this off my chest. On February 12, 2026, Anthropic quietly added a beta header called `redact-thinking-2026-02-12` to Claude Code. What it does is suppress all visible reasoning in the terminal: the chain-of-thought, the exploratory paths, the backtracking, the “wait, that approach won’t work” moments. All of it, gone. Hidden in the backend. You just get the output.

And I get it. Anthropic’s stated reasons are latency and UX clarity. Apparently seeing the raw reasoning stream “misleads” users into thinking the model is going down a dead end, when the discarded branch is just how extended thinking works. Okay, fair point technically. But here’s the thing: I’m an engineer. I want to see the dead ends. That’s how you learn to think about problems better.

The part that stings? Claude absolutely has extended thinking. You can enable it via the API with a `budget_tokens` parameter and watch the model reason step by step on Claude.ai (sketch just below). You can even use it on Amazon Bedrock and get varying levels of transparency into the thought process. But in Claude Code, the tool literally designed for developers, it’s all redacted by default. Those thinking tokens are still being generated and billed as output tokens; you just don’t get to see them. You’re literally paying for reasoning you can’t learn from.

Compare that to when I was using GitHub Copilot. Yes yes, I know the memes, but the tooling felt transparent. You could follow context, understand suggestions, and build intuition over time. I was pair programming. With Claude Code I’m rubber-stamping outputs from an oracle. Great for shipping, terrible for growing as an engineer.
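For context, here’s roughly what the API exposes that Claude Code now hides. A minimal sketch, assuming the official `anthropic` Python SDK; the model ID, token budgets, and prompt are placeholders, so check the Messages API docs for current values.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stream a response with extended thinking enabled. budget_tokens caps how many
# tokens the model may spend reasoning before it starts the final answer, and
# must be smaller than max_tokens.
with client.messages.stream(
    model="claude-sonnet-4-20250514",  # placeholder: any extended-thinking-capable model
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Why does this recursion overflow the stack? ..."}],
) as stream:
    for event in stream:
        if event.type == "content_block_delta":
            if event.delta.type == "thinking_delta":
                print(event.delta.thinking, end="", flush=True)  # the visible reasoning
            elif event.delta.type == "text_delta":
                print(event.delta.text, end="", flush=True)      # the final answer
```

Those `thinking_delta` events are exactly the stream Claude Code is now redacting, and exactly what you’re still billed for.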

So two genuine questions for the community:

  1. Are there alternatives that actually show the reasoning process? Based on my digging, the honest answer is that Cline with a local DeepSeek or Qwen model via Ollama is looking increasingly attractive for anyone who actually wants to watch an AI reason through their codebase. Local reasoning models stream their chain of thought in the open by default; there’s a sketch of the underlying setup at the bottom of this post.
  2. Does Codex CLI show thinking tokens? Short answer: sort of, but it’s already heading in the same direction. Codex CLI uses OpenAI reasoning models (o3, o4-mini) and did show chain-of-thought in early demos, but there’s already a GitHub issue open titled “Option to show CoT instead of hiding by default”. And here’s the kicker: a Reddit user on r/LocalLLaMA recently caught GPT-5.5 leaking its chain of thought inside Codex because the redaction wasn’t applied. So the “real” CoT exists; it’s just run through a summarization model and stripped before you see it. You’re not even getting the full thing. Codex reasoning tokens are also stored as encrypted items in session logs, completely unreadable after the fact. And yes, they’re also billed as output tokens you don’t get to see. (The second sketch at the bottom shows what the API actually hands back.)

So we’ve got two of the biggest AI coding tools both quietly moving toward hiding their reasoning from developers for UX and latency reasons, and both charging you for the thinking anyway. Someone has to say it: this is bad for developer education, and it’s a deliberate product choice, not a technical limitation.

Would love to hear from anyone who’s switched to Aider or Cline specifically because of this. Did the transparency actually help your learning? And if anyone’s figured out how to restore visible thinking in Claude Code (the hypogray article hints there are settings for it), please drop the config below.

TL;DR: Anthropic added a header in Feb 2026 to hide Claude Code’s thinking tokens. You’re paying for reasoning you can’t see. Codex is doing the same thing. Cline + local models looks like the only path to actual transparency right now.
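For question 1, here’s the kind of transparency I mean. A minimal sketch of what Cline-style tooling sits on top of, assuming a local Ollama server with a reasoning model already pulled; the model tag and prompt are placeholders.

```python
import ollama

# Assumes Ollama is running locally and you've pulled a reasoning model, e.g.:
#   ollama pull deepseek-r1:14b
stream = ollama.chat(
    model="deepseek-r1:14b",  # placeholder: a Qwen reasoning variant works too
    messages=[{"role": "user", "content": "Walk me through refactoring this function: ..."}],
    stream=True,
)

for chunk in stream:
    # R1-style models emit their chain of thought inline between <think> tags,
    # so the full reasoning streams straight to your terminal. Nothing redacted,
    # because the weights are on your machine and there's no server-side filter.
    print(chunk["message"]["content"], end="", flush=True)
```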
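And for question 2, this is what I mean about summarization. A sketch against the OpenAI Responses API, assuming a reasoning-capable model and an account that’s allowed to receive summaries; even when you ask for the most detailed trace, you get summary items back, never the raw chain of thought.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.responses.create(
    model="o4-mini",  # placeholder: any reasoning-capable model
    reasoning={"effort": "medium", "summary": "detailed"},  # most verbose option offered
    input="Why is this SQL query doing a full table scan? ...",
)

for item in resp.output:
    if item.type == "reasoning":
        # These are post-hoc summaries of the hidden reasoning, not the CoT itself.
        for part in item.summary:
            if part.type == "summary_text":
                print("[reasoning summary]", part.text)
    elif item.type == "message":
        for part in item.content:
            if part.type == "output_text":
                print(part.text)
```

As far as I can tell, the closest you get to the real thing is passing `include=["reasoning.encrypted_content"]`, which hands back an encrypted blob only the API itself can decrypt. That’s the “unreadable after the fact” part.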

Originally posted by u/Shubham_Garg123 on r/ClaudeCode