Paper Title
MetaKG: Meta-learning on Knowledge Graph for Cold-start Recommendation
Paper Authors
Paper Abstract
A knowledge graph (KG) consists of a set of interconnected typed entities and their attributes. Recently, KGs have been widely used as auxiliary information to enable more accurate, explainable, and diverse user preference recommendations. Specifically, existing KG-based recommendation methods aim to model high-order relations/dependencies from long-connectivity user-item interactions hidden in the KG. However, most of them ignore the cold-start problems (i.e., user cold-start and item cold-start) of recommendation analytics, which limits their performance in scenarios involving new users or new items. Inspired by the success of meta-learning on scarce training samples, we propose a novel meta-learning-based framework called MetaKG, which encompasses a collaborative-aware meta learner and a knowledge-aware meta learner, to capture meta users' preferences and entities' knowledge for cold-start recommendation. The collaborative-aware meta learner aims to locally aggregate user preferences for each user preference learning task. In contrast, the knowledge-aware meta learner globally generalizes knowledge representations across different user preference learning tasks. Guided by the two meta learners, MetaKG can effectively capture high-order collaborative relations and semantic representations, which can be easily adapted to cold-start scenarios. In addition, we devise a novel adaptive task scheduler that adaptively selects informative tasks for meta-learning, preventing the model from being corrupted by noisy tasks. Extensive experiments on various cold-start scenarios using three real-world datasets demonstrate that MetaKG outperforms all existing state-of-the-art competitors in terms of effectiveness, efficiency, and scalability.
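To make the local/global split described in the abstract concrete, below is a minimal, first-order meta-learning sketch (in the spirit of FOMAML) in PyTorch. Everything in it -- the toy PrefModel, make_task(), inner_lr, and the loss-based task filter used as a stand-in for the adaptive task scheduler -- is an illustrative assumption, not the authors' actual MetaKG implementation.

```python
# Hedged sketch: per-task local adaptation of "collaborative" parameters plus a
# global meta-update shared across tasks. All names are illustrative assumptions.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

class PrefModel(nn.Module):
    """Toy recommender: scores a (user, item) pair from dense features."""
    def __init__(self, dim=16):
        super().__init__()
        self.user = nn.Linear(dim, dim)    # "collaborative" part: adapted locally per task
        self.entity = nn.Linear(dim, dim)  # "knowledge" part: updated only globally

    def forward(self, u, i):
        return (self.user(u) * self.entity(i)).sum(-1)

def task_loss(model, batch):
    u, i, y = batch
    return nn.functional.binary_cross_entropy_with_logits(model(u, i), y)

def make_task(n=8, dim=16):
    """Synthetic 'user preference learning task': (support, query) interactions."""
    mk = lambda: (torch.randn(n, dim), torch.randn(n, dim),
                  torch.randint(0, 2, (n,)).float())
    return mk(), mk()

model = PrefModel()
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, inner_steps = 0.1, 2

for step in range(100):
    tasks = [make_task() for _ in range(8)]

    # Crude stand-in for the adaptive task scheduler: rank tasks by support
    # loss and drop the extremes as a proxy for uninformative/noisy tasks.
    with torch.no_grad():
        tasks.sort(key=lambda t: task_loss(model, t[0]).item())
    selected = tasks[1:-1]

    outer_opt.zero_grad()
    for support, query in selected:
        # Local step: adapt only the collaborative parameters of a
        # task-specific copy to this user's support interactions.
        fast = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(fast.user.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            task_loss(fast, support).backward()
            inner_opt.step()

        # Global step: accumulate query-set gradients taken at the adapted
        # point back onto the shared meta-parameters (first-order approx.).
        fast.zero_grad()
        task_loss(fast, query).backward()
        for p, fp in zip(model.parameters(), fast.parameters()):
            p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    outer_opt.step()
```

In this sketch, only the user branch moves in the inner loop while the entity branch changes only with the outer update, loosely mirroring the abstract's split between locally aggregating user preferences and globally generalizing knowledge representations across tasks.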