Paper Title

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

Paper Authors

Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin

Paper Abstract

The ability to continuously process and retain new information like we do naturally as humans is a feat that is highly sought after when training neural networks. Unfortunately, traditional optimization algorithms often require large amounts of data to be available at training time, and updates with respect to new data are difficult after the training process has been completed. In fact, when new data or tasks arise, previous progress may be lost as neural networks are prone to catastrophic forgetting. Catastrophic forgetting describes the phenomenon in which a neural network completely forgets previous knowledge when given new information. We propose a novel training algorithm called training by explaining, in which we leverage Layer-wise Relevance Propagation in order to retain the information a neural network has already learned in previous tasks when training on new data. The method is evaluated on a range of benchmark datasets as well as more complex data. Our method not only successfully retains the knowledge of old tasks within the neural networks but does so more resource-efficiently than other state-of-the-art solutions.
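The abstract only describes the approach at a high level, so the sketch below is a minimal toy illustration of the general idea of using LRP-style relevance scores to protect old-task knowledge, not the authors' published algorithm. It computes a simplified epsilon-rule relevance for the hidden units of a small network on stand-in old-task data, then damps gradient updates on the most relevant units while training on new data. The network architecture, the `lrp_relevance` helper, and the `(1 - importance)` damping rule are all illustrative assumptions.

```python
# Toy sketch (not the paper's implementation): use LRP-style relevance from an
# old task to scale down gradient updates on "important" hidden units when
# training on a new task.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simple two-layer network (illustrative)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def lrp_relevance(model, x, eps=1e-6):
    """Simplified epsilon-rule LRP for the Sequential(Linear, ReLU, Linear) above.
    Returns per-neuron relevance of the hidden layer, averaged over the batch x."""
    lin1, relu, lin2 = model[0], model[1], model[2]
    a1 = relu(lin1(x))                                  # hidden activations
    out = lin2(a1)                                      # logits
    # Start from the winning logit as the output relevance
    r_out = torch.zeros_like(out)
    r_out.scatter_(1, out.argmax(dim=1, keepdim=True), 1.0)
    # Redistribute relevance from output to hidden layer (epsilon rule, simplified)
    z = a1 @ lin2.weight.t() + lin2.bias + eps          # (batch, 2)
    s = r_out / z                                       # (batch, 2)
    r_hidden = a1 * (s @ lin2.weight)                   # (batch, 32)
    return r_hidden.abs().mean(dim=0)

# Relevance of hidden units measured on (stand-in) old-task data
old_x = torch.randn(64, 20)
with torch.no_grad():
    rel = lrp_relevance(model, old_x)
    importance = rel / (rel.max() + 1e-12)              # normalize to [0, 1]

# Train on new-task data, damping updates for highly relevant hidden units
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
new_x, new_y = torch.randn(64, 20), torch.randint(0, 2, (64,))

for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(new_x), new_y)
    loss.backward()
    with torch.no_grad():
        # Scale the row-wise gradients of the first layer by (1 - importance):
        # the more relevant a hidden unit was for the old task, the less it moves.
        model[0].weight.grad *= (1.0 - importance).unsqueeze(1)
        model[0].bias.grad *= (1.0 - importance)
    opt.step()
```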
