Paper Title

Prototypical Contrastive Learning and Adaptive Interest Selection for Candidate Generation in Recommendations

Paper Authors

Ningning Li, Qunwei Li, Xichen Ding, Shaohu Chen, Wenliang Zhong

Paper Abstract

Deep Candidate Generation plays an important role in large-scale recommender systems. It takes user history behaviors as inputs and learns user and item latent embeddings for candidate generation. In the literature, conventional methods suffer from two problems. First, a user has multiple embeddings to reflect various interests, and this number is fixed. However, given different levels of user activeness, a fixed number of interest embeddings is sub-optimal. For example, less active users may need fewer embeddings to represent their interests than active users. Second, negative samples are often generated by strategies with unobserved supervision, and similar items could have different labels. This problem is termed class collision. In this paper, we aim to advance the typical two-tower DNN candidate generation model. Specifically, an Adaptive Interest Selection Layer is designed to learn the number of user embeddings adaptively, in an end-to-end way, according to each user's level of activeness. Furthermore, we propose a Prototypical Contrastive Learning Module to tackle the class collision problem introduced by negative sampling. Extensive experimental evaluations show that the proposed scheme remarkably outperforms competitive baselines on multiple benchmarks.
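The abstract gives no implementation details, so the sketch below is only a minimal illustration of how a prototype-based contrastive objective could replace item-level negative sampling in a two-tower retrieval model, written in PyTorch-style Python. The function name, the number of prototypes, the temperature, and the hard prototype assignment are all assumptions made for illustration and are not taken from the paper.

    import torch
    import torch.nn.functional as F

    def prototypical_contrastive_loss(item_emb, prototypes, temperature=0.1):
        """Illustrative prototype-based contrastive loss (not the paper's exact method).

        Instead of contrasting an item against randomly sampled negative items,
        which may belong to the same underlying class (class collision), each
        item is contrasted against a set of learnable prototype vectors: it is
        pulled toward its assigned prototype and pushed away from the others.

        item_emb   : (B, D) batch of item-tower embeddings
        prototypes : (K, D) learnable prototype embeddings (e.g. an nn.Parameter)
        """
        item_emb = F.normalize(item_emb, dim=-1)
        prototypes = F.normalize(prototypes, dim=-1)

        # Cosine similarity between every item and every prototype: (B, K)
        logits = item_emb @ prototypes.t() / temperature

        # Hard-assign each item to its closest prototype; a real system might
        # instead obtain assignments from clustering or use soft assignments.
        targets = logits.detach().argmax(dim=-1)

        # InfoNCE over prototypes: the assigned prototype is the positive and
        # all other prototypes are negatives, so two similar items are never
        # pushed apart from each other directly.
        return F.cross_entropy(logits, targets)

    # Example usage with random tensors (shapes only; hypothetical sizes):
    # prototypes = torch.nn.Parameter(torch.randn(64, 128))
    # loss = prototypical_contrastive_loss(torch.randn(256, 128), prototypes)

The number of prototypes and the temperature would need to be tuned per dataset; the Adaptive Interest Selection Layer described in the abstract is a separate component and is not covered by this sketch.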
