Paper Title
Equivariant Contrastive Learning for Sequential Recommendation
Paper Authors
Paper Abstract
Contrastive learning (CL) benefits the training of sequential recommendation (SR) models with informative self-supervision signals. Existing solutions apply general sequential data augmentation strategies to generate positive pairs and encourage their representations to be invariant. However, due to the inherent properties of user behavior sequences, some augmentation strategies, such as item substitution, can lead to changes in user intent. Learning indiscriminately invariant representations for all augmentation strategies might be suboptimal. Therefore, we propose Equivariant Contrastive Learning for Sequential Recommendation (ECL-SR), which endows SR models with greater discriminative power, making the learned user behavior representations sensitive to invasive augmentations (e.g., item substitution) and insensitive to mild augmentations (e.g., feature-level dropout masking). In detail, we use a conditional discriminator to capture differences in behavior due to item substitution, which encourages the user behavior encoder to be equivariant to invasive augmentations. Comprehensive experiments on four benchmark datasets show that the proposed ECL-SR framework achieves competitive performance compared to state-of-the-art SR models. The source code is available at https://github.com/Tokkiu/ECL.
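To make the two objectives described in the abstract concrete, the following is a minimal, hypothetical PyTorch-style sketch of an ECL-SR-like training step. It is not the authors' implementation (see the linked repository for that); the names `encoder`, `discriminator`, `mild_aug`, `invasive_aug`, the `(condition, candidate)` discriminator signature, and the loss weighting are assumptions for illustration only.

```python
# Hypothetical sketch of an ECL-SR-style training step (not the authors' code).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss between two batches of L2-normalized views."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def ecl_sr_step(encoder, discriminator, seq, mild_aug, invasive_aug, lambda_eq=0.5):
    """One training step combining:
       - an invariance (contrastive) term over mild augmentations, and
       - an equivariance term where a conditional discriminator must tell
         original sequences from item-substituted ones, which keeps the
         encoder sensitive to invasive augmentations.
    """
    # Invariance: two mild views (e.g., feature-level dropout) should map close together.
    h1 = encoder(mild_aug(seq))
    h2 = encoder(mild_aug(seq))
    loss_inv = info_nce(h1, h2)

    # Equivariance: representations must remain sensitive to item substitution.
    h_orig = encoder(seq)
    h_sub = encoder(invasive_aug(seq))            # item-substituted view
    logits_real = discriminator(h_orig, h_orig)   # hypothetical signature: (condition, candidate)
    logits_fake = discriminator(h_orig, h_sub)
    loss_eq = F.binary_cross_entropy_with_logits(
        torch.cat([logits_real, logits_fake]),
        torch.cat([torch.ones_like(logits_real), torch.zeros_like(logits_fake)]),
    )
    return loss_inv + lambda_eq * loss_eq
```

The key design point the sketch tries to convey is the split treatment of augmentations: mild views are pulled together by the contrastive term, while invasive views are kept distinguishable through the discriminator term rather than being forced to be invariant.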