Original Reddit post

Hey, I tried to run a local LLM with Ollama and integrate it with Claude Code. I've been using gemma4:26b, which, compared to other models, has done good work so far. But my computer starts lagging (32 GB of RAM). I'm wondering if anyone has had a similar experience: which model are you using, and do you have any recommendation on which model I should use?

Originally posted by u/MaterialAppearance21 on r/ClaudeCode
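For anyone in a similar spot, here is a minimal sketch of how you might poke at a local Ollama instance from Python to see which models are installed and try a smaller one before settling on it. It assumes Ollama is running on its default port (11434); the gemma3:12b tag is only a placeholder for whichever smaller model you actually pull.

```python
# Minimal sketch: query a locally running Ollama server to list installed
# models and run a single prompt against a smaller one.
# Assumes Ollama's default endpoint (http://localhost:11434); the model tag
# used below is an example, not a recommendation.
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def list_models():
    """Return the tags of models currently pulled into Ollama."""
    with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def generate(model, prompt):
    """Send one non-streaming generation request and return the response text."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print("Installed models:", list_models())
    # Swap in whatever smaller quantized model you pulled (placeholder tag).
    print(generate("gemma3:12b", "Summarize what a context window is in one sentence."))
```

The idea is just to make it cheap to compare a couple of smaller quantized models side by side on your own prompts and watch memory use while they run, rather than committing to one large model that pushes the machine into swap.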