Right now, Claude Code is a brilliant mind trapped inside a fragile body: your local machine. You ask it to build something complex and suddenly it's fighting your local Postgres config, hanging on `npm install`, burning context windows debugging dependency issues, or failing because it can't spin up a proper mobile emulator or GUI environment. And when you want parallelism (testing multiple architectures, branching experiments, or handing work between agents), you're stuck doing everything sequentially, because your local environment isn't forkable infrastructure.

Most AI sandboxes don't really solve this either. They're constrained, and not designed for generalized agent-controlled compute.

I'm one of the builders behind Fleeks, and we've been working on an SDK that gives Claude Code its own actual computer. Instead of executing locally, Claude can provision cloud containers through the SDK and fully control them through API calls.

A few things ended up mattering way more than we expected:

- Docker-in-Docker support (almost nobody supports this properly)
- 25+ pre-warmed runtimes with sub-200ms startup
- GUI support via Xvfb/VNC for browsers, frontend previews, Flutter emulators, desktop apps, etc.
- Stateful workspace forking

The forking part is the real unlock. If Claude wants to test three architectures, run RL workloads, or hand off work between agents, it can clone the exact workspace state (files, processes, env vars, and running services), then execute branches in parallel without resetting the environment.

That changes the model from "AI generates code on your laptop" to "AI operates its own persistent compute environment." It feels like a much bigger shift than most people realize right now.

There's a free tier if anyone wants to experiment with the infrastructure. Happy to answer questions about the architecture or how we handle the Docker-in-Docker routing internally.
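To make the forking model concrete, here's a minimal conceptual sketch of what "clone the exact workspace state, then run branches in parallel" means. Everything here is hypothetical and illustrative — `Workspace`, `fork`, and `try_architecture` are invented names for this sketch, not the Fleeks SDK API, and a deep copy stands in for what a real system would do with container snapshots:

```python
# Conceptual model of stateful workspace forking. All names here are
# hypothetical illustrations, NOT the Fleeks SDK API.
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy
from dataclasses import dataclass, field


@dataclass
class Workspace:
    """Snapshot of an agent's environment: files, env vars, services."""
    files: dict = field(default_factory=dict)
    env: dict = field(default_factory=dict)
    services: list = field(default_factory=list)

    def fork(self) -> "Workspace":
        # A real implementation would snapshot filesystem layers and the
        # process tree; a deep copy stands in for that here.
        return deepcopy(self)


def try_architecture(ws: Workspace, name: str) -> Workspace:
    # Each branch mutates its own fork without touching the parent.
    ws.files[f"src/{name}.py"] = f"# {name} implementation"
    ws.env["ARCH"] = name
    return ws


base = Workspace(files={"README.md": "shared starting point"})
forks = [base.fork() for _ in range(3)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(try_architecture, forks,
                            ["monolith", "microservices", "event_driven"]))

# The parent workspace is untouched; each branch diverged independently.
assert "ARCH" not in base.env
assert {r.env["ARCH"] for r in results} == {"monolith",
                                            "microservices",
                                            "event_driven"}
```

The point of the sketch: because each branch gets a full, independent copy of the state, nothing has to be torn down or rebuilt between experiments — which is exactly why forking beats a fresh-container-per-task model.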
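For anyone wondering what Docker-in-Docker involves at all: the post doesn't describe how Fleeks routes it internally, but the generic, documented pattern (per the official `docker:dind` image) is to run an inner Docker daemon with extended privileges and point a sibling client container at it:

```shell
# Generic Docker-in-Docker setup using the official dind image.
# This is standard Docker usage, not the Fleeks internals.
docker network create dind-net

# Inner Docker daemon; --privileged is required for dind.
docker run -d --name inner-dockerd --privileged \
  --network dind-net --network-alias docker \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v dind-certs:/certs/client \
  docker:dind

# A sibling container can now drive the inner daemon over the network.
docker run --rm --network dind-net \
  -e DOCKER_HOST=tcp://docker:2376 \
  -e DOCKER_TLS_VERIFY=1 \
  -e DOCKER_CERT_PATH=/certs/client \
  -v dind-certs:/certs/client:ro \
  docker:cli docker version
```

The "almost nobody supports this properly" point is largely about that `--privileged` flag: most multi-tenant sandboxes can't safely grant it, which is why getting dind right in shared infrastructure is nontrivial.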
Originally posted by u/Consistent-Stock9034 on r/ClaudeCode
