Has anyone had success running Qwen or comparable locally hosted LLMs? What specs would you recommend? I've got mine fitting in VRAM with some spillover, but it doesn't work worth a damn with OpenClaw; it's nothing more than a useless chatbot in Ollama. I've tried Qwen 3.6, Gemma 4, and Mistral, until I found out Mistral doesn't support reasoning.
Originally posted by u/skelecorn666 on r/ArtificialInteligence
