Hi! My question is not so much about which new architecture or training advance has had the greatest impact on these models, but rather about how these models, and the way we interact with them, are changing how we think, work, and communicate with one another.

I have noticed myself, for instance, that I rarely just google things anymore. Instead, I tend to rely on ChatGPT for research, because it often seems to find better results more quickly. It has also significantly changed the way I study, since I use it almost like a personal, always-available tutor.

What I am wondering, then, is what the broader cultural impact of LLMs might be. On the one hand, some people may derive great value from them, especially for learning or exploring complex topics. On the other hand, others might simply let the models do the work for them, which could perhaps lead to a loss of mental sharpness or critical thinking. I also find it culturally interesting how we think about and describe these systems, since we seem to personify them quite a lot.

Basically, I would be interested in anything you find surprising, relevant, or worth discussing in this context.
Originally posted by u/themoe_ on r/ArtificialInteligence
