Original Reddit post

So there’s a new open‑weight coding model called OpenClaw that dropped this week, and it’s already making waves. The Washington Post reported that developers were lining up at Baidu’s Beijing HQ to get it installed – apparently it’s that good. I pulled together a comparison with Claude Opus 4.5 and GPT‑5.2 based on the latest benchmarks, and the numbers are pretty surprising.

Key specs:

- 1.2T total parameters, 28B active (MoE)
- 1M token context window (Crustacean Attention)
- Built‑in “claw‑swarm”: up to 128 parallel coding agents
- Apache 2.0 license – completely free
- API pricing: $0.50/1M input tokens via Fireworks/Together

I haven’t seen this much hype around an open‑source coding model since DeepSeek. Has anyone here actually tried running it locally? The quantized versions are hefty (~250GB for 1.8‑bit), but apparently doable on a single 24GB GPU with offloading.

What are your thoughts? Is this the first open‑source model that truly beats closed‑source for coding, or are the benchmarks overhyped?
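For anyone who wants to sanity-check the quoted numbers, here's a quick back-of-envelope sketch. All figures come straight from the post; the variable names are just mine, and the size estimate ignores things like embedding layers kept at higher precision, so treat it as a rough ballpark:

```python
# Back-of-envelope math for the figures quoted in the post.

TOTAL_PARAMS = 1.2e12       # 1.2T total parameters (MoE)
QUANT_BITS = 1.8            # the 1.8-bit quantization mentioned
PRICE_PER_M_INPUT = 0.50    # $ per 1M input tokens via hosted APIs
CONTEXT_TOKENS = 1_000_000  # the claimed 1M-token context window

# Approximate on-disk size of the quantized weights:
# params * bits-per-param / 8 bits-per-byte, in GB.
quant_size_gb = TOTAL_PARAMS * QUANT_BITS / 8 / 1e9
print(f"~{quant_size_gb:.0f} GB quantized")  # ~270 GB, roughly matching the ~250GB figure

# Cost of sending one completely full context window as input.
cost_full_context = CONTEXT_TOKENS / 1_000_000 * PRICE_PER_M_INPUT
print(f"${cost_full_context:.2f} per maxed-out prompt")  # $0.50
```

The gap between the computed ~270GB and the quoted ~250GB is plausible if some tensors use a smaller effective bit-width in the actual quant mix, but either way it's far beyond 24GB of VRAM, so any local run would lean heavily on CPU/disk offloading.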

Originally posted by u/Remarkable-Dark2840 on r/ArtificialInteligence