I don’t know how important the implications are, but it’s interesting. https://research.google/blog/teaching-llms-to-reason-like-bayesians/

“We tested a range of LLMs and found that they struggled to form and update probabilistic beliefs. We further found that continuing the LLMs’ training through exposure to interactions between users and the Bayesian Assistant — a model that implements the optimal probabilistic belief update strategy — dramatically improved the LLMs’ ability to approximate probabilistic reasoning.

While our findings from our first experiment point to the limitations of particular LLMs, the positive findings of our subsequent fine-tuning experiments can be viewed as a demonstration of the strength of the LLM ‘post-training’ paradigm more generally. By training the LLMs on demonstrations of the optimal strategy to perform the task, we were able to improve their performance considerably, suggesting that they learned to approximate the probabilistic reasoning strategy illustrated by the demonstrations. The LLMs were able to generalize this strategy to domains where it is difficult to encode it explicitly in a symbolic model, demonstrating the power of distilling a classic symbolic model into a neural network.”
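For context, the “optimal probabilistic belief update strategy” the blog describes is, at its core, Bayes’ rule applied sequentially as evidence comes in. Here’s a minimal sketch of that kind of update in Python; the coin-flip hypotheses and likelihood numbers are made up purely for illustration and are not from the Google Research work:

    # Minimal sketch of sequential Bayesian belief updating.
    # Hypotheses and likelihood values are invented for illustration;
    # they are not taken from the paper or blog post.

    def bayes_update(prior: dict, likelihood: dict) -> dict:
        """Return the posterior P(h | evidence) given a prior P(h)
        and the likelihood P(evidence | h) for each hypothesis h."""
        unnormalized = {h: prior[h] * likelihood[h] for h in prior}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    # Two competing hypotheses about a coin: fair vs. biased toward heads.
    belief = {"fair": 0.5, "biased": 0.5}

    # Likelihood of observing heads under each hypothesis.
    heads_likelihood = {"fair": 0.5, "biased": 0.9}

    # Update the belief after seeing three heads in a row.
    for _ in range(3):
        belief = bayes_update(belief, heads_likelihood)

    print(belief)  # belief in "biased" grows with each observed head

As I read it, the fine-tuning result amounts to training an LLM to imitate traces of exactly this kind of computation, so that it can approximate the same updates in domains where you can’t write the symbolic model down.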
Originally posted by u/AngleAccomplished865 on r/ArtificialInteligence
