I'm looking into ways to improve the performance of LLMs on particular domains, and I'd love to hear what's actually working. Are full fine-tuning, LoRA, RAG, and prompt engineering delivering the goods? What datasets are you using, and how are you evaluating the results? Trying to separate the hype from reality. submitted by /u/Tech_us_Inc
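For anyone weighing full fine-tuning against LoRA, the core trade-off is parameter count. A minimal sketch of the LoRA idea below (hypothetical dimensions, plain NumPy, not any real model or library): the pretrained weight W stays frozen, and only two small low-rank factors A and B are trained, with the update added as (alpha / r) * B @ A.

```python
import numpy as np

# Hypothetical layer dimensions and LoRA rank (illustrative only).
d_in, d_out, r, alpha = 1024, 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable, small random init
B = np.zeros((d_out, r))                     # trainable, zero init so the delta starts at 0

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # ever materializing the full-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size                 # what full fine-tuning would update
lora_params = A.size + B.size        # what LoRA updates: 2 * r * d
print(f"full fine-tune params: {full_params}, LoRA params: {lora_params}")
# Here LoRA trains 2r/d ~ 1.6% of the layer's parameters.
```

With B zero-initialized, the adapted layer starts out identical to the frozen model, which is why LoRA training is stable from step one; the memory savings are what make it attractive when full fine-tuning doesn't fit your hardware budget.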
Originally posted by u/Tech_us_Inc on r/ArtificialInteligence
