Paper Title

Learning with Recoverable Forgetting

Authors

Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, Xinchao Wang

Abstract

Life-long learning aims at learning a sequence of tasks without forgetting the previously acquired knowledge. However, the involved training data may not be life-long legitimate due to privacy or copyright reasons. In practical scenarios, for instance, the model owner may wish to enable or disable the knowledge of specific tasks or specific samples from time to time. Such flexible control over knowledge transfer, unfortunately, has been largely overlooked in previous incremental or decremental learning methods, even at a problem-setup level. In this paper, we explore a novel learning scheme, termed Learning wIth Recoverable Forgetting (LIRF), that explicitly handles task- or sample-specific knowledge removal and recovery. Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal, which allow for isolating user-designated knowledge from a pre-trained network and injecting it back when necessary. During the knowledge deposit process, the specified knowledge is extracted from the target network and stored in a deposit module, while the insensitive or general knowledge of the target network is preserved and further augmented. During knowledge withdrawal, the taken-off knowledge is added back to the target network. The deposit and withdrawal processes require only a few epochs of fine-tuning on the removal data, ensuring both data and time efficiency. We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
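The abstract describes a deposit-then-withdraw workflow: task-specific knowledge is extracted from a pre-trained target network into a deposit module, and later re-injected on demand. The following is a minimal, hypothetical PyTorch sketch of that workflow only, not the paper's actual method; `TargetNet`, the cloned deposit head, and the uniform-logit unlearning loss are illustrative assumptions used to show the deposit/withdraw interface.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the authors' implementation): a target network with a
# shared backbone and a classification head, plus a "deposit" module that keeps a
# snapshot of the task-specific knowledge so it can be re-attached later.

class TargetNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))


def deposit(target: TargetNet, removal_loader, epochs: int = 3) -> nn.Module:
    """Fine-tune only on the removal data so the target network 'forgets' the
    designated samples, while a cloned deposit head preserves their knowledge."""
    deposit_head = nn.Linear(256, target.head.out_features)
    deposit_head.load_state_dict(target.head.state_dict())  # snapshot before forgetting

    opt = torch.optim.SGD(target.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in removal_loader:
            logits = target(x)
            # Push predictions on removal data toward a uniform distribution
            # (a simple unlearning surrogate; LIRF's actual losses differ).
            uniform = torch.full_like(logits, 1.0 / logits.size(1))
            loss = nn.functional.kl_div(
                logits.log_softmax(dim=1), uniform, reduction="batchmean"
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
    return deposit_head  # store separately; it holds the withdrawn knowledge


def withdraw(target: TargetNet, deposit_head: nn.Module) -> None:
    """Re-attach the stored head so the removed knowledge becomes usable again."""
    target.head.load_state_dict(deposit_head.state_dict())
```

The point of the sketch is the interface: `deposit` requires only a few epochs on the removal data and produces a compact module, and `withdraw` restores the isolated knowledge without retraining from scratch, mirroring the data- and time-efficiency claim in the abstract.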
