Original Reddit post

I’m currently completing my philosophy degree at a good uni in the UK, and am working on a final exam surrounding the philosophy of language. There was a concept I was unsure of, so I summed it up in bullet points and put it through my preferred chat bot, which re-summarised it and gave a counter-argument. My original idea was correct, and I will now properly research the counter-argument. The issue is that many students will not have the perseverance to do the first step; they will go straight to X model, ask for an explanation, and probably fail to understand it. We also know models, especially the free ones, often hallucinate and will give false information. Sadly, LLMs could be a great tool alongside lectures to quickly clarify answers, but I don’t think we have the self-restraint to let them be just that. Here is where anti-intellectualism swoops in. This makes me feel shit for using LLMs as a tool to begin with. submitted by /u/thedeadenddolls

Originally posted by u/thedeadenddolls on r/ArtificialInteligence