Hi, I’ve been trying to run Claude Code with Ollama and local open-source models for coding purposes. I’ve tried qwen-2.5-coder:7b, llama3.1:8b, gpt-oss:20b, and qwen3:8b. For each of these models, I ran ollama launch claude --model <name> and asked the LLM to describe the codebase I invoked it in. They all failed to make use of tools. I could not find resources about this. Are these models too small for tool calling? submitted by /u/matthieukhl
Originally posted by u/matthieukhl on r/ClaudeCode
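One way to narrow this down is to take Claude Code out of the loop and probe the model directly: Ollama's /api/chat endpoint accepts a "tools" array, and a model that supports tool calling should return a tool_calls field in its reply message. A minimal stdlib sketch (assumes Ollama's default port 11434; the list_files tool here is a made-up example, not something Claude Code registers):

```python
import json
import urllib.request

def build_chat_request(model, prompt):
    """Build an Ollama /api/chat payload with one example tool attached."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        "tools": [{
            "type": "function",
            "function": {
                "name": "list_files",  # hypothetical tool, for probing only
                "description": "List files in a directory",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }],
    }

def made_tool_call(response):
    """True if the model's reply contains at least one tool call."""
    return bool(response.get("message", {}).get("tool_calls"))

def check_model(model, host="http://localhost:11434"):
    """POST to the local Ollama server; requires `ollama serve` to be running."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(build_chat_request(model, "List the files in /tmp")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return made_tool_call(json.load(resp))
```

If check_model returns False for a given model on a prompt that obviously calls for a tool, the problem is likely the model (or its Ollama template) rather than Claude Code's configuration.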
