Paper Title
Coarse-to-Fine Sparse Sequential Recommendation
Paper Authors
Paper Abstract
Sequential recommendation aims to model dynamic user behavior from historical interactions. Self-attentive methods have proven effective at capturing short-term dynamics and long-term preferences. Despite their success, these approaches still struggle with sparse data, on which it is difficult to learn high-quality item representations. We propose to model user dynamics from shopping intents and interacted items simultaneously. The learned intents are coarse-grained and serve as prior knowledge for item recommendation. To this end, we present a coarse-to-fine self-attention framework, namely CaFe, which explicitly learns coarse-grained and fine-grained sequential dynamics. Specifically, CaFe first learns intents from coarse-grained sequences, which are dense and hence provide high-quality user intent representations. Then, CaFe fuses intent representations into item encoder outputs to obtain improved item representations. Finally, we infer recommended items based on the representations of items and their corresponding intents. Experiments on sparse datasets show that CaFe outperforms state-of-the-art self-attentive recommenders by 44.03% NDCG@5 on average.
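The abstract describes a three-step pipeline: encode the dense coarse-grained intent sequence, encode the sparse item sequence, fuse the intent representations into the item encoder outputs, and score candidates using both. The toy sketch below illustrates that flow in numpy. It is only a minimal illustration, not the paper's actual method: the single-head unprojected attention, the additive fusion, the item-to-intent mapping, and all sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x):
    # Single-head scaled dot-product self-attention over a sequence x
    # (no learned projections, kept minimal for illustration).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Hypothetical toy setup: 5 items, 3 intents (e.g. categories), dim 8.
n_items, n_intents, d = 5, 3, 8
item_emb = rng.normal(size=(n_items, d))
intent_emb = rng.normal(size=(n_intents, d))
item_to_intent = np.array([0, 1, 1, 2, 2])  # assumed item -> intent map

# One user's history: interacted items and their coarse-grained intents.
item_seq = [0, 2, 4]
intent_seq = [1, 1, 2]

# Step 1: coarse-grained encoder. Intent sequences are dense, so their
# representations are expected to be higher quality.
intent_reps = self_attention(intent_emb[intent_seq])

# Step 2: fine-grained encoder over the (possibly sparse) item sequence.
item_reps = self_attention(item_emb[item_seq])

# Step 3: fuse intent knowledge into item representations
# (additive fusion is an assumption of this sketch).
fused = item_reps + intent_reps

# Recommend: score all items with the last fused state, combined with
# an intent prior (the score of each candidate item's intent).
item_scores = item_emb @ fused[-1]
intent_scores = intent_emb @ intent_reps[-1]
final = item_scores + intent_scores[item_to_intent]
ranking = np.argsort(-final)  # candidate items, best first
```

The intent prior rewards items whose category matches the predicted next intent, which is how coarse-grained knowledge can guide fine-grained ranking even when item-level signals are sparse.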