Hi guys, I would like to run an LLM locally, for a few reasons:

- To keep my projects private (I use GitHub Copilot a lot thanks to my student license)
- To save money
- To learn, by diving into a field that's new to me: literally "installing" an LLM, then optimizing and fine-tuning it

The main challenge is not the installation itself (I found out that's easy with Ollama and similar tools) but the computing power. I have two machines: a PC with a Core i5-10400F, 24 GB of DDR4 RAM, and an RTX 3070 with 8 GB of VRAM; and a MacBook Pro M1 with 16 GB of RAM and a 1 TB SSD.

I'm aware that 8 GB of VRAM is insufficient for a useful model, but is there any workaround? My Mac has unified memory; in other words, can I take advantage of its big SSD to run a model with more parameters, or am I wrong about that?

What model do you guys use? I saw that MiniMax 2.5 and GLM-5 are performing very well. How do you suggest I start? Or is this impossible due to my weak machines?
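Not part of the original question, but a back-of-the-envelope sketch of the VRAM math behind the 8 GB concern: a model's weight footprint is roughly parameter count times bytes per weight, and quantization (e.g. 4-bit, as commonly used with Ollama) shrinks it. The ~20% overhead factor for KV cache and activations is an assumption, not a measured value.

```python
def vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in decimal GB: weights plus ~20% overhead
    for KV cache and activations (the overhead factor is a guess)."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8) * overhead
    return bytes_total / 1e9

# A 7B model at 4-bit quantization: ~4.2 GB, fits in 8 GB of VRAM.
print(round(vram_gb(7, 4), 1))
# A 13B model at 4-bit: ~7.8 GB, very tight on 8 GB once context grows.
print(round(vram_gb(13, 4), 1))
# The same 13B model at fp16: ~31 GB, hopeless on either machine.
print(round(vram_gb(13, 16), 1))
```

By this estimate, 4-bit 7B-class models are the realistic starting point on the RTX 3070, and the M1's 16 GB of unified memory could stretch a bit further; note the unified-memory advantage comes from the RAM being shared with the GPU, not from the SSD.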
Originally posted by u/Dependent-Juice-874 on r/ArtificialInteligence
