Researchers disclosed serious vulnerabilities in Ollama, including “Bleeding Llama,” a critical unauthenticated memory-leak flaw that can expose prompts, environment variables, API keys, and other sensitive data from AI inference servers. Separate flaws in the Windows updater may also allow persistent remote code execution (RCE) through a malicious update chain. If you run Ollama for local or internal AI workflows: patch promptly, avoid exposing port 11434 to the public internet, disable Windows auto-updates for now, and put authentication in front of any reachable instance.
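A minimal mitigation sketch for the “don’t expose port 11434” advice, assuming Ollama’s documented `OLLAMA_HOST` environment variable and its default port; `<server-ip>` is a placeholder you would replace with your host’s actual address:

```shell
# Bind Ollama to the loopback interface only, so port 11434 is not
# reachable from other hosts (OLLAMA_HOST sets the bind address).
export OLLAMA_HOST=127.0.0.1:11434
ollama serve &

# From a DIFFERENT machine, confirm the port is not publicly reachable.
# This request should be refused or time out if the binding took effect.
curl --max-time 5 http://<server-ip>:11434/api/version \
  && echo "WARNING: Ollama is reachable from the network"
```

If remote access is genuinely needed, keep the loopback binding and front the instance with an authenticating reverse proxy (e.g. basic auth or mTLS) rather than exposing the port directly.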
Originally posted by u/raptorhunter22 on r/ArtificialInteligence

