Paper Title

Incorporating Bias-aware Margins into Contrastive Loss for Collaborative Filtering

Paper Authors

An Zhang, Wenchang Ma, Xiang Wang, Tat-Seng Chua

Paper Abstract

Collaborative filtering (CF) models easily suffer from popularity bias, which makes recommendation deviate from users' actual preferences. However, most current debiasing strategies are prone to playing a trade-off game between head and tail performance, thus inevitably degrading the overall recommendation accuracy. To reduce the negative impact of popularity bias on CF models, we incorporate Bias-aware margins into Contrastive loss and propose a simple yet effective BC Loss, where the margin tailors quantitatively to the bias degree of each user-item interaction. We investigate the geometric interpretation of BC loss, then further visualize and theoretically prove that it simultaneously learns better head and tail representations by encouraging the compactness of similar users/items and enlarging the dispersion of dissimilar users/items. Over eight benchmark datasets, we use BC loss to optimize two high-performing CF models. On various evaluation settings (i.e., imbalanced/balanced, temporal split, fully-observed unbiased, tail/head test evaluations), BC loss outperforms the state-of-the-art debiasing and non-debiasing methods with remarkable improvements. Considering the theoretical guarantee and empirical success of BC loss, we advocate using it not just as a debiasing strategy, but also as a standard loss in recommender models.
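To make the core idea more concrete, below is a minimal, hypothetical PyTorch sketch of a sampled-softmax contrastive loss that adds an interaction-dependent angular margin to the positive pair, in the spirit of the abstract. The function name, the `bias_margin` input, the clamping details, and the temperature default are illustrative assumptions; the paper's exact margin construction (how the bias degree of each user-item interaction is estimated and converted into a margin) is not reproduced here.

```python
# A minimal, hypothetical sketch of a contrastive loss with a bias-aware angular
# margin on the positive pair. The per-interaction margins (`bias_margin`) are
# assumed to be supplied externally; the paper derives them from the estimated
# bias degree of each user-item interaction, which is not reproduced here.
import math

import torch
import torch.nn.functional as F


def bias_aware_margin_contrastive_loss(user_emb, pos_item_emb, neg_item_emb,
                                        bias_margin, temperature=0.1):
    """Sampled-softmax contrastive loss with an angular margin on the positive pair.

    user_emb:     (B, d)    user embeddings
    pos_item_emb: (B, d)    embeddings of interacted (positive) items
    neg_item_emb: (B, N, d) embeddings of N sampled negatives per user
    bias_margin:  (B,)      per-interaction angular margin in radians; larger for
                            more biased (e.g. popularity-driven) interactions
    """
    u = F.normalize(user_emb, dim=-1)
    p = F.normalize(pos_item_emb, dim=-1)
    n = F.normalize(neg_item_emb, dim=-1)

    # Angle of the positive pair, then add the bias-aware margin so that a more
    # biased interaction must be matched under a stricter (smaller-cosine) target.
    cos_pos = (u * p).sum(-1).clamp(-1 + 1e-7, 1 - 1e-7)                    # (B,)
    cos_pos_margin = torch.cos((torch.acos(cos_pos) + bias_margin).clamp(max=math.pi))

    # Plain cosine similarities for the sampled negatives.
    cos_neg = torch.einsum('bd,bnd->bn', u, n)                              # (B, N)

    # Positive logit in column 0; standard cross-entropy recovers the contrastive form.
    logits = torch.cat([cos_pos_margin.unsqueeze(1), cos_neg], dim=1) / temperature
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)
```

As a usage note, a simple stand-in for experimentation could map item popularity to a small angle in `bias_margin`; that mapping is an assumption for illustration only, not the quantitative bias-degree formulation proposed in the paper.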
