I am trying to learn about computer science, so I picked up a book, and it started out talking about transistors and such things. As is sometimes the case with learning, you might end up going down a few rabbit holes. I did, and I ended up trying to learn about AI too. I was doing a bit of research, which at one point prompted this response:

‘AI’s biggest mysteries center on the “black box” nature of deep learning, where even creators cannot fully explain how systems reach specific decisions. Yes, many advanced AI systems—particularly deep learning and large language models—are considered “black boxes” because while users know the inputs and outputs, the internal, complex decision-making process is largely uninterpretable. They function by identifying complex patterns, making it difficult to understand exactly why a specific result was produced.’

That is really freaky, right?
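To make the "black box" point concrete, here is a minimal sketch (my own toy example, not from the quoted response): a tiny neural network trained on XOR with NumPy. After training, it maps inputs to the right outputs, but if you print its learned weight matrices you just see arrays of floats with no human-readable rule inside, which is the uninterpretability the quote describes, scaled down.

```python
import numpy as np

# A toy "black box": a 2-input, 8-hidden-unit, 1-output network
# trained on XOR. It learns the pattern, but its weights are
# opaque numbers -- nothing in them reads like "if inputs differ, output 1".

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

_, out0 = forward(X)
loss_before = float(np.mean((out0 - y) ** 2))

lr = 0.5
for _ in range(20000):
    h, out = forward(X)
    # backpropagation through the MSE loss and both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

_, out = forward(X)
loss_after = float(np.mean((out - y) ** 2))
print("loss before/after training:", round(loss_before, 3), round(loss_after, 3))
# The learned parameters: just floats, no visible decision rule.
print("learned W1:\n", np.round(W1, 2))
```

The same inspection problem, multiplied across billions of parameters instead of a couple dozen, is why even the builders of large models cannot read the "reason" for a specific answer directly out of the weights.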
Originally posted by u/Hopeful_Adeptness964 on r/ArtificialInteligence
