Earlier today I kept hitting usage limits every 4 hours despite spreading usage across two Max 20x accounts… I'd had enough.

Workaround: run Claude Code on the frontend, using ChatGPT models on the backend. I opted for an older model, but you can use a newer one if you want. Right now a $25/month account gets you about as much usage as two Max 20x Claude accounts ($400/month).

Steps:

1. Purchase a ChatGPT account: https://chatgpt.com/pricing/
2. Install the LiteLLM proxy server: `pip install 'litellm[proxy]'`
3. Download `docker-compose.yml`, `chatgpt_system_message_callback.py`, and `litellm-config.yaml` from https://gist.github.com/cowwoc/1cb95025e8a375166e6a34092951fdf6
4. Define the following environment variables before launching claude-code:
   - `ANTHROPIC_BASE_URL=http://host.docker.internal:4000/`
   - `ANTHROPIC_AUTH_TOKEN=sk-my-fake-litellm-key`
   - `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1`
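Step 4 can be sketched as a short shell snippet (a minimal example; it assumes the proxy is already running on port 4000 and that the token matches the key configured in your `litellm-config.yaml`):

```shell
# Point Claude Code at the local LiteLLM proxy instead of Anthropic's API.
# The auth token is a placeholder: it only needs to match the key the
# proxy expects; it is never sent to Anthropic.
export ANTHROPIC_BASE_URL=http://host.docker.internal:4000/
export ANTHROPIC_AUTH_TOKEN=sk-my-fake-litellm-key
export CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1

# Then launch Claude Code as usual, e.g.:
#   claude
# and its API traffic will route through the proxy.
```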
Note: `chatgpt_system_message_callback.py` is a workaround for this LiteLLM bug: https://github.com/BerriAI/litellm/issues/22968

The above configuration maps all Claude models to gpt-5.2-codex. Modify the configuration if you wish to use a different mapping.

In my experience, GPT 5.2 is dumber than Sonnet 4.6, but I'm able to get about as much usage out of ChatGPT's $25/month plan as two $200/month Claude plans. I can only imagine what a Pro account ($250/month) would feel like. You could probably use GPT-5.4 all day long without a problem. Give it a try.
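The model mapping described above lives in `litellm-config.yaml`. The gist's actual contents may differ; this is just a hedged sketch of what such a mapping could look like (the wildcard pattern and the `OPENAI_API_KEY` variable are assumptions, not taken from the gist):

```yaml
# Hypothetical litellm-config.yaml sketch: route any Claude model name that
# Claude Code requests to a single OpenAI backend model.
model_list:
  - model_name: "claude-*"              # wildcard: matches every Claude model
    litellm_params:
      model: "openai/gpt-5.2-codex"     # backend model actually called
      api_key: "os.environ/OPENAI_API_KEY"

general_settings:
  master_key: "sk-my-fake-litellm-key"  # must match ANTHROPIC_AUTH_TOKEN
```

To use a different mapping, replace the backend `model` value, or add separate `model_name` entries per Claude model instead of the wildcard.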
Originally posted by u/cowwoc on r/ClaudeCode
