TL;DR: The Tamagotchi pet is cute. The real story is that Claude Code is being rebuilt as a speculative execution engine, that Mythos is the model that makes the predictions accurate enough to be useful, and that the measurement infrastructure to calibrate all of it is the one thing in half a million lines of code that Anthropic actually took steps to hide. The pet is the distraction. The architecture is the product.
Everyone’s talking about the Tamagotchi pet, or focused on BUDDY, KAIROS, Undercover Mode, and the Capybara model names. I cloned the repo and read the actual TypeScript instead of other people’s summaries, and I think all of that is a distraction from something much bigger. I think the Claude Code source tells us what Mythos actually is - not just a bigger model, but the reason the infrastructure exists to use it.

Five days before the full source dropped, someone reverse-engineering the CC binary found a system called Speculation. It’s gated behind tengu_speculation and hardcoded off in public builds. What does it do? After Claude finishes responding to you, it predicts what you’re going to type next, forks a background API call, and starts executing that predicted prompt before you hit Enter. When that speculation completes, it immediately generates the next prediction and starts executing that too. Predict, execute, predict, execute. It tries to stay 2-3 steps ahead of you at all times.

It runs in a filesystem overlay, so speculative file edits don’t touch your real code until you accept. It has boundary detection that pauses at bash commands, file edits needing permission, and denied tools. It tracks acceptance rates, time saved, and whether predictions chain successfully. This is branch prediction applied to coding agents: speculatively execute the predicted path, keep the results if the prediction was right, discard them if it was wrong.
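To make the predict-execute-predict loop concrete, here is a minimal sketch of how such an engine could be shaped. Everything here - the class name SpeculationEngine, the method names, the confidence threshold, the boundary regex - is my own invention for illustration; only the behavior (chain up to 3 steps ahead, pause at risky boundaries, keep on match, discard on mispredict) comes from the post's description.

```typescript
// Hypothetical sketch of the speculation loop. Not Anthropic's code.
type Prediction = { prompt: string; confidence: number };

interface SpeculationResult {
  prompt: string;
  output: string;
  accepted: boolean;
}

class SpeculationEngine {
  private chain: SpeculationResult[] = [];
  readonly maxDepth = 3; // "tries to stay 2-3 steps ahead"

  constructor(
    private predict: (history: string[]) => Prediction,
    private execute: (prompt: string) => string, // runs against an overlay, not the real FS
  ) {}

  // After each real turn, speculate forward until maxDepth or a boundary.
  speculate(history: string[]): void {
    this.chain = [];
    const h = [...history];
    while (this.chain.length < this.maxDepth) {
      const p = this.predict(h);
      if (p.confidence < 0.5 || this.isBoundary(p.prompt)) break; // pause, don't guess
      const output = this.execute(p.prompt);
      this.chain.push({ prompt: p.prompt, output, accepted: false });
      h.push(p.prompt, output); // feed the speculative turn back into prediction
    }
  }

  // Boundary detection stand-in: stop before anything with side effects.
  private isBoundary(prompt: string): boolean {
    return /\b(bash|rm|sudo)\b/.test(prompt);
  }

  // When the user actually types: keep the result if the prediction matched,
  // otherwise discard the whole speculative chain.
  commit(actualPrompt: string): SpeculationResult | undefined {
    const head = this.chain.shift();
    if (head && head.prompt === actualPrompt) {
      head.accepted = true; // overlay edits would be promoted to the real tree here
      return head;
    }
    this.chain = []; // misprediction: everything downstream is invalid too
    return undefined;
  }
}
```

The key design point the post highlights is in commit: a single misprediction invalidates the entire chain, exactly as in CPU branch prediction, which is why prediction accuracy (and, the post argues, Mythos) matters so much.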
Nobody in today’s conversation is connecting this to the source dump, and it is the single most important thing in the entire codebase. Here’s where it gets interesting. Every other unreleased feature in this repo - KAIROS, BUDDY, Coordinator Mode, ULTRAPLAN, Undercover Mode - shipped its actual implementation behind compile-time feature flags. The code is right there, just gated behind checks that Bun strips from public builds. But one directory, moreright/, is different. It’s the only thing in 512K lines of code that uses a completely separate stub-and-overlay architecture: the external build gets a no-op shell, and the real implementation lives in Anthropic’s internal repo and gets swapped in during internal builds. The comment literally says “Stub for external builds - the real hook is internal only.” They didn’t just feature-gate this one. They made sure the implementation never touches the public codebase at all.

The stub reveals the interface, though. It’s a React hook called useMoreRight that fires before every API call, fires after every turn completion, can block queries from executing, gets full write access to the conversation history and input box, and renders custom JSX into the terminal UI. It only activates for Anthropic employees with a specific env var set. This is their internal experimentation and measurement framework - the thing they use to instrument features like Speculation before anyone else sees them.

Think about what these two systems do together. Speculation predicts what you’ll type and pre-executes it. moreright sits on every query boundary and can compare what you actually typed against what Speculation predicted. It can compare speculative output against real execution output. It can render internal dashboards showing prediction accuracy in real time. Every Anthropic employee running CC with moreright enabled is generating training signal for the speculation system.
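A stub-and-overlay gate of the kind described could look roughly like this. The hook name useMoreRight and the existence of an env-var gate come from the post; the interface shape, method names, and env-var name are my guesses (and the real thing is a React hook rendering JSX, which this plain-function sketch flattens away).

```typescript
// Guessed shape of the external stub. Names other than useMoreRight are hypothetical.
interface MoreRightHook {
  onBeforeQuery(query: string): boolean; // fires before every API call; false blocks it
  onTurnComplete(history: string[]): void; // fires after every turn completion
  render(): string | null; // custom UI into the terminal (JSX in the real hook)
}

// "Stub for external builds - the real hook is internal only."
function useMoreRight(): MoreRightHook {
  // Env-var name invented for illustration; the post only says "a specific env var".
  if (process.env.CLAUDE_CODE_INTERNAL !== "1") {
    // The no-op shell that ships publicly: never blocks, observes nothing, renders nothing.
    return {
      onBeforeQuery: () => true,
      onTurnComplete: () => {},
      render: () => null,
    };
  }
  // Internal builds overlay the real implementation in place of this module.
  throw new Error("internal implementation not present in external builds");
}
```

The point of the pattern is that, unlike a compile-time flag, the interesting code never exists in the public tree; the stub leaks only the call surface.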
Predictions go out, measurements come back, predictions improve. Their own employees are the training set for their own tool’s predictive capability. And the overlay architecture means the measurement code never ships externally: nobody can see what they’re collecting or how they’re using it. This is the one thing they actually bothered to hide.

There’s a third piece: /advisor. /advisor opus lets you set a secondary model that watches over the primary model, and the advisor-tool-2026-03-01 beta header confirms active development. Run Sonnet as your main loop because it’s cheap and fast; have Opus act as a quality gate because it’s expensive and smart. Now connect this to Speculation: speculate with the fast model, validate with the smart model, and show the user something that’s both fast and correct. Three systems forming a single pipeline - Speculation generates candidates, Advisor validates them, moreright measures everything.

Now here’s the Mythos connection. Last week’s CMS exposure told us Capybara/Mythos is a new tier above Opus, with “dramatically higher” scores on coding, reasoning, and cybersecurity benchmarks. The draft blog said it’s expensive to run and not ready for general release. The CC source already has capybara, capybara-fast, and capybara-fast[1m] model strings baked in, plus migration functions like migrateFennecToOpus and migrateSonnet45ToSonnet46. The model-switching infrastructure is already built and waiting.

Everyone is thinking about Mythos as “a bigger, smarter model you’ll talk to.” I think that’s wrong. I think Mythos is the model that makes Speculation actually work. A better model means better predictions, better predictions mean more aggressive speculation, and more aggressive speculation means the agent is further ahead of you at all times. The speculation architecture isn’t a feature bolted onto Claude Code. It’s the delivery mechanism.
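The fast-speculate / smart-validate pipeline can be sketched in a few lines. The wiring and every name below are mine; only the idea (a cheap model generates a candidate, an expensive advisor gates whether it's surfaced) comes from the post.

```typescript
// Hypothetical sketch of the Speculation -> Advisor handoff. Not real Claude Code.
interface Candidate {
  prompt: string;
  output: string;
  model: string;
}

async function speculateWithAdvisor(
  prompt: string,
  fastModel: (p: string) => Promise<string>, // e.g. Sonnet: cheap, runs the main loop
  advisor: (c: Candidate) => Promise<boolean>, // e.g. Opus: expensive quality gate
): Promise<Candidate | null> {
  // 1. Speculation generates a candidate with the fast model.
  const output = await fastModel(prompt);
  const candidate: Candidate = { prompt, output, model: "sonnet" };
  // 2. Advisor validates; only approved candidates reach the user.
  //    (In the post's framing, moreright would record both outcomes for measurement.)
  return (await advisor(candidate)) ? candidate : null;
}
```

The user-facing effect is the one the post describes: latency of the fast model, quality ceiling of the smart one, because validation happens on work that was already done speculatively.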
Mythos doesn’t need to be cheap enough to run as your primary model if it’s running speculatively in the background, validated by an advisor, with results pre-staged in a filesystem overlay waiting for you to catch up. The “expensive to run” problem goes away when you only run it on predicted paths with a high probability of being accepted, and fall back to cheaper models for everything else.

The draft blog said they’re rolling out to cybersecurity defenders first, “giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits.” A speculative execution engine powered by a model that’s “far ahead of any other AI model in cyber capabilities” doesn’t just find vulnerabilities when you ask it to. It finds them while you’re still typing your next question. It’s already three steps into the exploit chain before you’ve finished describing the attack surface. That’s not a chat bot. That’s an autonomous security researcher that happens to have a text box attached to it.
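The cost argument reduces to a one-line routing decision: spend the expensive model only where the speculation is likely to be kept. The function, the threshold, and the routing logic below are invented to illustrate the economics; the model names echo strings the post says are in the source.

```typescript
// Toy illustration of "only run Mythos on high-probability paths". Invented logic.
type ModelChoice = "capybara" | "sonnet";

function routeSpeculation(acceptanceProb: number, threshold = 0.8): ModelChoice {
  // High-confidence prediction: worth the expensive model, because the work
  // will almost certainly be kept. Uncertain prediction: use the cheap model,
  // since a discarded speculation is pure wasted spend.
  return acceptanceProb >= threshold ? "capybara" : "sonnet";
}
```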
Originally posted by u/denoflore_ai_guy on r/ClaudeCode
