I have not interacted with chatbots other than the typically useless customer support ones, and my days of writing school papers and the technical and marketing documents required by past jobs are behind me. I have a pretty good understanding of the token-based statistical LLM approach and how online content is hoovered up and re-assimilated. I have read about cases where school students ran afoul of assignment guidelines by using chatbots instead of writing papers on their own, and about chatbots proffering incorrect information.

It seems to me that any text generated by a chatbot must be verified and cross-checked in order to have a high degree of confidence in its output. Since verifying the output takes as much time as doing the work the old-fashioned way, it doesn't gain me anything. This is mainly curiosity on my part, as I do not plan to use chatbots and have gone as far as adding a browser extension to suppress the Google AI Overview.

Are there any of you who have to deal with this challenge, and how do you handle it?
Originally posted by u/BFTSPK on r/ArtificialInteligence
