Original Reddit post

One concept that quietly sits at the centre of modern AI is entropy. In information theory, entropy measures the uncertainty in a system: the more unpredictable something is, the higher its entropy. What’s interesting is that modern machine learning systems, especially neural networks and language models, are fundamentally trained around this concept. Training often involves minimizing cross-entropy loss, which essentially measures how far the model’s predicted probabilities are from the actual outcomes. In simple terms, models learn by reducing uncertainty about what comes next.

Here’s the part that made it click for me while researching AI history: it’s honestly fascinating that such a fundamental idea, uncertainty and information, sits underneath so many modern AI systems.
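To make the idea concrete, here is a minimal sketch (my own, not from the original post) of both quantities: Shannon entropy as "expected surprise" of a distribution, and cross-entropy loss as the negative log-probability a model assigned to the outcome that actually occurred. The function names and the toy token vocabulary are illustrative assumptions, not anything from the post.

```python
import math

def entropy(probs):
    """Shannon entropy in nats: expected surprise of a distribution.
    Uniform distributions (maximally unpredictable) score highest."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def cross_entropy(predicted_probs, true_index):
    """Loss for one prediction: -log of the probability the model
    assigned to the outcome that actually happened."""
    return -math.log(predicted_probs[true_index])

# Uniform over 4 outcomes is maximally unpredictable -> highest entropy.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.386 nats
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.168 nats, nearly certain

# Toy next-token prediction over ["cat", "dog", "bird"]:
predicted = [0.7, 0.2, 0.1]
print(cross_entropy(predicted, 0))  # true token "cat":  low loss (~0.357)
print(cross_entropy(predicted, 2))  # true token "bird": high loss (~2.303)
```

Minimizing the second quantity over many examples pushes the model to assign high probability to what actually comes next, which is exactly the "reducing uncertainty" framing above.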

Originally posted by u/ocean_protocol on r/ArtificialInteligence