Paper Title

Cross-Task Knowledge Distillation in Multi-Task Recommendation

Authors

Chenxiao Yang, Junwei Pan, Xiaofeng Gao, Tingyu Jiang, Dapeng Liu, Guihai Chen

Abstract

Multi-task learning (MTL) has been widely used in recommender systems, wherein predictions of each type of user feedback on items (e.g., click, purchase) are treated as individual tasks and jointly trained with a unified model. Our key observation is that the prediction results of each task may contain task-specific knowledge about users' fine-grained preferences towards items. While such knowledge could be transferred to benefit other tasks, it is overlooked under the current MTL paradigm. This paper instead proposes a Cross-Task Knowledge Distillation framework that attempts to leverage the prediction results of one task as supervised signals to teach another task. However, integrating MTL and KD in a proper manner is non-trivial due to several challenges, including task conflicts, inconsistent magnitudes, and the requirement of synchronous optimization. As countermeasures, we 1) introduce auxiliary tasks with quadruplet loss functions to capture cross-task fine-grained ranking information and avoid task conflicts, 2) design a calibrated distillation approach to align and distill knowledge from auxiliary tasks, and 3) propose a novel error correction mechanism to enable and facilitate synchronous training of teacher and student models. Comprehensive experiments are conducted to verify the effectiveness of our framework on real-world datasets.
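
To make the core idea concrete, the following is a minimal, hypothetical sketch of cross-task distillation in a shared-bottom multi-task recommender: one task's detached predictions serve as soft labels for another task's head. The model class, head names, and loss weight are illustrative assumptions and do not reproduce the paper's quadruplet loss, calibration, or error-correction components.

```python
# Hedged sketch only: all names (SharedBottomMTL, cross_task_distill_loss,
# the 0.1 weight) are illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedBottomMTL(nn.Module):
    """Two-task model: a click head and a purchase head over a shared encoder."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.click_head = nn.Linear(hidden, 1)     # task A (acts as teacher signal)
        self.purchase_head = nn.Linear(hidden, 1)  # task B (acts as student signal)

    def forward(self, x):
        h = self.shared(x)
        p_click = torch.sigmoid(self.click_head(h)).squeeze(-1)
        p_purchase = torch.sigmoid(self.purchase_head(h)).squeeze(-1)
        return p_click, p_purchase


def cross_task_distill_loss(p_teacher, p_student):
    """Use one task's detached predictions as soft labels for the other task.
    Detaching is a simplification standing in for keeping teacher gradients
    from interfering with the student during synchronous training."""
    soft_target = p_teacher.detach()
    return F.binary_cross_entropy(p_student, soft_target)


# Toy usage with random data.
model = SharedBottomMTL(in_dim=16)
x = torch.randn(32, 16)
y_click = torch.randint(0, 2, (32,)).float()
y_purchase = torch.randint(0, 2, (32,)).float()

p_click, p_purchase = model(x)
loss = (
    F.binary_cross_entropy(p_click, y_click)              # supervised click loss
    + F.binary_cross_entropy(p_purchase, y_purchase)      # supervised purchase loss
    + 0.1 * cross_task_distill_loss(p_click, p_purchase)  # cross-task KD term
)
loss.backward()
```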
