Paper Title
Online Continual Learning under Extreme Memory Constraints
Paper Authors
Paper Abstract
Continual Learning (CL) aims to develop agents emulating the human ability to sequentially learn new tasks while being able to retain knowledge obtained from past experiences. In this paper, we introduce the novel problem of Memory-Constrained Online Continual Learning (MC-OCL), which imposes strict constraints on the memory overhead that a candidate algorithm may use to avoid catastrophic forgetting. As most, if not all, previous CL methods violate these constraints, we propose an algorithmic solution to MC-OCL: Batch-level Distillation (BLD), a regularization-based CL approach, which effectively balances stability and plasticity in order to learn from data streams, while preserving the ability to solve old tasks through distillation. Our extensive experimental evaluation, conducted on three publicly available benchmarks, empirically demonstrates that our approach successfully addresses the MC-OCL problem and achieves accuracy comparable to prior distillation methods that require higher memory overhead.
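To make the distillation idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of distillation-regularized update used by regularization-based CL methods: a cross-entropy loss on the current batch provides plasticity, while a KL term matching the softened outputs of a frozen copy of the previous model provides stability. All names and hyperparameters (`old_model`, `alpha`, `temperature`) are illustrative assumptions. Note that keeping a frozen copy of the old model, as done here, is exactly the memory overhead that MC-OCL forbids and that BLD is designed to avoid; this sketch is background context for the general technique, not the authors' BLD algorithm.

```python
import torch
import torch.nn.functional as F

def distillation_step(model, old_model, x, y, optimizer,
                      alpha=0.5, temperature=2.0):
    """One training step combining a new-task loss (plasticity) with a
    distillation loss that keeps the current model close to the frozen
    previous model's predictions (stability)."""
    old_model.eval()
    with torch.no_grad():
        old_logits = old_model(x)           # distillation targets from the old model

    logits = model(x)
    task_loss = F.cross_entropy(logits, y)  # learn the new task

    # Match softened predictions of the previous model (Hinton-style distillation).
    distill_loss = F.kl_div(
        F.log_softmax(logits / temperature, dim=1),
        F.softmax(old_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    loss = (1 - alpha) * task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this generic formulation, `alpha` trades off stability against plasticity and `temperature` controls how much of the old model's output distribution (beyond the argmax) is preserved; memory-constrained variants must obtain the distillation targets without retaining a second full network in memory.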