Original Reddit post

GLM-5 landed today: 745B params, 200K context, approaching Claude Opus on coding benchmarks, and absurdly cheap (~$0.11/M tokens vs $5/M for Opus). If you use Clother, you can switch to it right now:

```
clother config zai
```

Then just launch:

```
clother zai
```

That's it. Claude Code runs as usual, same TUI, same flags (`--dangerously`, `--continue`), but requests hit GLM-5 through Z.ai's Anthropic-compatible endpoint. No manual env hacking, no settings.json edits: one command to set up your Z.ai API key, one command to launch.

I've been using GLM models as a fallback when I hit my Claude Pro limits for a while now. GLM-4.7 was already solid for everyday coding tasks. First impressions of GLM-5: noticeably better at multi-step agentic work and long-context coherence. Still early, though; I'll update as I test more.

Repo: https://github.com/jolehuit/clother
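For context on the "manual env hacking" the tool replaces: Claude Code can be pointed at any Anthropic-compatible endpoint via its documented `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` environment variables. A minimal sketch of that manual route, with the caveat that the Z.ai endpoint URL below is my assumption and not taken from the Clother repo (check Z.ai's docs for the current value):

```shell
# Manual alternative to clother: override Claude Code's endpoint by hand.
# ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN are documented Claude Code
# overrides; the URL here is an assumed Z.ai Anthropic-compatible endpoint.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-zai-api-key"   # placeholder, not a real key

echo "Claude Code requests will now go to $ANTHROPIC_BASE_URL"
# claude   # launch Claude Code as usual; requests route through Z.ai
```

Clother's value is just that it manages these per-provider settings for you, so switching back to Anthropic doesn't mean juggling exported variables across shells.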

Originally posted by u/yossa8 on r/ClaudeCode