Would it be useful for AI models to be able to unlearn topics, symbolic meanings, or temporary variable assignments? Say a model trains on a Wikipedia dump, and after training a page gets changed because it contained incorrect data. The model is now stuck with that incorrect knowledge: it either has to be retrained from scratch, or it has to pick up the change progressively, with its weights slowly shifting until it becomes correct over time. What would it look like to instead quickly unlearn the old data once it's validated as incorrect?

To take it to an extreme: say the model was fed data about a topic that was deliberately wrong, such as a satire post that social media misinterpreted and pushed mainstream, to the point that it shaped users' understanding of the topic. Then the truth comes out and corrects that viewpoint. With the ability to quickly unlearn the old data, the model could be trained on the new, correct data and immediately respond to people with accurate information, helping prevent further spread of the misinformation. This would be crucial for topics like war, cyberattacks, and physical health, where people jump the gun, their claims gain popularity and traction, and the claims become "reality" to those who fall victim to them.

So, concretely:

- What would it look like to quickly unlearn old data once it's validated as incorrect?
- How would this help in a production environment where accurate, up-to-date information is critical?
- How would this help AI models in the future?
- How might this be achieved when building a new type of neural network?
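For the "quickly unlearn without full retraining" part, one published direction is exact unlearning via sharding, as in the SISA approach (Bourtoule et al., "Machine Unlearning"): training data is split into shards, a small model is trained per shard, and predictions are ensembled; to unlearn a point you retrain only the shard that held it. Below is a toy sketch of that idea. The 1-D least-squares per-shard model, the mean ensemble, and all names are illustrative assumptions, not the paper's implementation:

```python
def fit_shard(points):
    """Least-squares fit y = a*x + b over one shard's (x, y) points (toy model)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    if denom == 0:  # degenerate shard: fall back to a constant predictor
        return 0.0, sy / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

class ShardedModel:
    """SISA-style sketch: one model per data shard, predictions averaged."""

    def __init__(self, data, n_shards=2):
        # Round-robin split; each shard keeps its own training points.
        self.shards = [list(data[i::n_shards]) for i in range(n_shards)]
        self.models = [fit_shard(s) for s in self.shards]

    def predict(self, x):
        preds = [a * x + b for a, b in self.models]
        return sum(preds) / len(preds)

    def unlearn(self, point):
        """Remove one training point and retrain ONLY its shard --
        the other shards (and their training cost) are untouched."""
        for i, shard in enumerate(self.shards):
            if point in shard:
                shard.remove(point)
                self.models[i] = fit_shard(shard)
                return True
        return False

# Data follows y = 2x, except one "misinformation" point (3, 100).
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 100.0), (4.0, 8.0), (5.0, 10.0)]
model = ShardedModel(data, n_shards=2)
print(model.predict(3.0))      # skewed by the bad point
model.unlearn((3.0, 100.0))    # retrains just one of the two shards
print(model.predict(3.0))      # back to ~6.0 (= 2 * 3)
```

The trade-off is the usual one: sharding makes deletion cheap and exact, at some cost in single-model accuracy, which is why it is usually contrasted with approximate "forgetting" methods that adjust weights in place.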
Originally posted by u/DutytoDevelop on r/ArtificialInteligence
