- cross-posted to:
- ai_reddit
Abstract: Do Large Language Models (LLMs) think and reason? Are they perpetual information machines, producing endless coherent and correct text from finite training data? We explore how LLMs work and whether they produce rational thought and endless information. We show how theoretical considerations and experimental results from philosophy, statistics, information theory, and machine learning argue against the thesis that LLMs are rational, information-generating entities.
Originally posted by u/Maybe-monad on r/ArtificialInteligence
