The benchmark sheet dropped this morning and the ML community is losing it. What DeepSeek R2 scores:

• MMLU: 90.8 (GPT-4o: 88.7)
• HumanEval coding: 93.2 — new open-source SOTA
• MATH reasoning: 88.9
• Runs on a single A100, fully local, zero API costs

Hugging Face hit 300k downloads in the first 6 hours. The open-source community is already fine-tuning it for medical, legal, and finance use cases.

The cost gap is now absurd: GPT-4o charges ~$0.015/1k tokens, while DeepSeek run locally costs $0.00 in API fees. For high-volume use cases, this is a 50x cost reduction overnight. The 'closed model moat' argument is officially dead. Every startup bleeding $40k/month on OpenAI now has a real migration path.
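For a sense of scale, here is a back-of-envelope sketch using only the two figures the post cites (the ~$0.015/1k-token price and the $40k/month bill); it just answers: what token volume does that spend imply?

```python
# Sanity-check the post's own numbers: how many tokens per month
# does a $40k bill buy at ~$0.015 per 1k tokens?
API_PRICE_PER_1K = 0.015   # $ per 1,000 tokens (GPT-4o figure from the post)
monthly_bill = 40_000      # $/month spend cited in the post

tokens = monthly_bill / API_PRICE_PER_1K * 1_000
print(f"{tokens:,.0f} tokens/month")  # roughly 2.7 billion tokens
```

At that volume, even a modest per-token saving compounds fast, which is the post's point; actual local-inference cost would still include GPU hardware or rental, which this sketch deliberately leaves out.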
Originally posted by u/Ok-Drama-6800 on r/ArtificialInteligence
