Paper Title
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Paper Authors
Paper Abstract
Learning user representations is a vital technique for effective user modeling and personalized recommender systems. Existing approaches often derive an individual set of model parameters for each task by training on separate data. However, representations of the same user potentially share commonalities, such as preference and personality, even across different tasks. As such, these separately trained representations can be suboptimal in performance as well as inefficient in terms of parameter sharing. In this paper, we study how to continually learn user representations task by task, whereby new tasks are learned while reusing partial parameters from old ones. A new problem arises: when new tasks are trained, previously learned parameters are very likely to be modified, and as a result, an artificial neural network (ANN)-based model may permanently lose its capacity to serve previously well-trained tasks, an issue termed catastrophic forgetting. To address this issue, we present \emph{Conure}, the first \underline{con}tinual, or lifelong, \underline{u}ser \underline{re}presentation learner -- i.e., one that learns new tasks over time without forgetting old ones. Specifically, we propose iteratively removing the less important weights of old tasks in a deep user representation model, motivated by the fact that neural network models are usually over-parameterized. In this way, we can learn many tasks with a single model by reusing the important weights and modifying the less important ones to adapt to new tasks. We conduct extensive experiments on two real-world datasets with nine tasks and show that \emph{Conure} largely outperforms the standard model that does not purposely preserve such old "knowledge", and performs competitively with, or sometimes better than, models trained either individually for each task or simultaneously by merging all task data.
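To make the weight-isolation idea concrete, the sketch below illustrates one common way such a scheme can be realized in PyTorch: after a task is trained, the largest-magnitude weights among the still-free parameters are reserved (frozen) for that task, and later tasks may only update the remaining free weights. This is a minimal illustration of the general technique, not the authors' released implementation; the names `prune_and_freeze`, `zero_frozen_grads`, and the `keep_ratio` value are assumptions made for the example.

```python
import torch

def prune_and_freeze(weight: torch.Tensor, free_mask: torch.Tensor, keep_ratio: float = 0.3):
    """Reserve the most important (largest-magnitude) free weights for the current task.

    Returns the boolean mask of weights frozen for this task and the updated
    free mask left over for future tasks. `keep_ratio` is an illustrative choice.
    """
    scores = weight.abs() * free_mask.float()       # only currently free weights are candidates
    k = int(keep_ratio * int(free_mask.sum()))      # how many weights this task keeps
    threshold = scores.flatten().kthvalue(scores.numel() - k).values
    task_mask = scores > threshold                  # weights kept (frozen) for this task
    new_free_mask = free_mask & ~task_mask          # remaining capacity for future tasks
    return task_mask, new_free_mask

def zero_frozen_grads(weight: torch.Tensor, free_mask: torch.Tensor):
    """While training a new task, block gradient updates to weights frozen by old tasks."""
    if weight.grad is not None:
        weight.grad.mul_(free_mask.to(weight.grad.dtype))

# Usage: one shared layer learned across two tasks.
layer = torch.nn.Linear(64, 64)
free = torch.ones_like(layer.weight, dtype=torch.bool)

# ... train task 1 normally, then reserve its important weights ...
mask_t1, free = prune_and_freeze(layer.weight.data, free)

# ... while training task 2, call this after loss.backward() and before optimizer.step():
zero_frozen_grads(layer.weight, free)   # frozen task-1 weights stay intact
```

At inference time for an old task, only the weights reserved up to that task would be used, which is what allows new tasks to be added without degrading earlier ones.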