With all the massive spending from big tech on GPUs and data centres, is the goal just to train and deploy LLMs? Haven't we already plateaued in terms of LLM improvement? Will all this new infrastructure actually yield improvements?
Originally posted by u/bubugugu on r/ArtificialInteligence
