Hey everyone, we just open-sourced DharmaOCR on Hugging Face. The models and datasets are all public and free to use and experiment with. We also published the paper documenting the experimentation behind it, for anyone who wants to dig into the methodology. We fine-tuned open-source SLMs (3B and 7B parameters) with SFT + DPO and benchmarked them against GPT-5.4, Gemini 3.1 Pro, Claude Opus 4.6, Google Document AI, and open-source alternatives like OlmOCR, Deepseek-OCR, GLMOCR, and Qwen3.
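For anyone curious what the DPO stage looks like in practice, here's a minimal sketch using Hugging Face's TRL library. Treat it as an assumption-laden illustration: the checkpoint name, dataset file, and hyperparameters are placeholders rather than the recipe from the paper, and the exact `DPOTrainer` signature shifts a bit between TRL versions.

```python
# Minimal DPO stage with Hugging Face TRL. All names/values are placeholders,
# not the actual DharmaOCR recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_checkpoint = "your-org/ocr-slm-sft"  # hypothetical checkpoint from the SFT stage
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# DPO trains on preference triples: {"prompt", "chosen", "rejected"}.
train_dataset = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")

trainer = DPOTrainer(
    model=model,                                        # reference model is created internally
    args=DPOConfig(output_dir="dharma-dpo", beta=0.1),  # beta is illustrative
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases take tokenizer= instead
)
trainer.train()
```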
- The specialized models came out on top: 0.925 (7B) and 0.911 (3B).
- DPO using the model’s own degenerate outputs as rejected examples cut the failure rate by 87.6% (see the pair-building sketch after this list).
- AWQ quantization cuts per-page inference cost by ~22%, with a negligible effect on performance (see the quantization sketch after this list).
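The degenerate-output trick from the second bullet is straightforward to sketch: sample from the model, flag outputs that collapse into failure modes, and pair each flagged output with the ground-truth transcription as a DPO triple. The heuristic below (a repetition-loop detector) is an assumed stand-in, not the paper's actual degeneracy criterion, and the record fields are hypothetical.

```python
# Build DPO preference pairs from the model's own degenerate outputs.
# looks_degenerate() is an assumed heuristic, not the paper's criterion.
import json

def looks_degenerate(text: str, max_repeat: int = 4) -> bool:
    """Flag outputs that repeat the same non-empty line many times in a row."""
    lines = text.splitlines()
    run = 1
    for prev, cur in zip(lines, lines[1:]):
        run = run + 1 if (cur == prev and cur.strip()) else 1
        if run > max_repeat:
            return True
    return False

# Hypothetical generations: each record pairs a prompt with the model's
# sampled output and the ground-truth transcription for that page.
model_outputs = [
    {"prompt": "Transcribe this page.",
     "output": "om mani\n" * 12,  # a classic repetition loop
     "gold": "om mani padme hum"},
    {"prompt": "Transcribe this page.",
     "output": "om mani padme hum",
     "gold": "om mani padme hum"},
]

pairs = [
    {"prompt": r["prompt"], "chosen": r["gold"], "rejected": r["output"]}
    for r in model_outputs
    if looks_degenerate(r["output"])
]

with open("dpo_pairs.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```

The JSONL this writes is the same `{prompt, chosen, rejected}` format the `DPOTrainer` sketch above consumes.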
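For the quantization bullet, here's what a standard AWQ pass could look like with the AutoAWQ library, again under assumptions: the checkpoint path is hypothetical, and the 4-bit group-128 config is the common default rather than whatever produced the ~22% number. Note that AutoAWQ targets causal LMs, so a vision-language OCR model may need its own quantization path.

```python
# AWQ-quantize a fine-tuned checkpoint with AutoAWQ. Paths are placeholders;
# the quant config is the common 4-bit default, not a recipe from the paper.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "your-org/dharma-ocr-7b"  # hypothetical checkpoint name
quant_path = "dharma-ocr-7b-awq"

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# 4-bit weights, quantized in groups of 128 with zero-point offsets.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```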
Models & datasets: https://huggingface.co/Dharma-AI
Full paper: https://arxiv.org/abs/2604.14314
Paper summary: https://gist.science/paper/2604.14314

Originally posted by u/augusto_camargo3 on r/ArtificialInteligence
