Paper Title
Knowledge Is Flat: A Seq2Seq Generative Framework for Various Knowledge Graph Completion
Paper Authors
Paper Abstract
Knowledge Graph Completion (KGC) has recently been extended to multiple knowledge graph (KG) structures, initiating new research directions, e.g., static KGC, temporal KGC, and few-shot KGC. Previous works often design KGC models closely coupled with specific graph structures, which inevitably results in two drawbacks: 1) structure-specific KGC models are mutually incompatible; 2) existing KGC methods are not adaptable to emerging KGs. In this paper, we propose KG-S2S, a Seq2Seq generative framework that can tackle different verbalizable graph structures by unifying the representation of KG facts into "flat" text, regardless of their original form. To remedy the loss of KG structure information in the "flat" text, we further improve the input representations of entities and relations, as well as the inference algorithm, in KG-S2S. Experiments on five benchmarks show that KG-S2S outperforms many competitive baselines, setting new state-of-the-art performance. Finally, we analyze KG-S2S's ability on different relations and on non-entity generation.
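To illustrate the core idea of the abstract, the following is a minimal sketch of how a KG fact can be verbalized as "flat" text and completed by a Seq2Seq model. It is not the authors' implementation: the T5 backbone, the prompt format, and the flatten_query helper are illustrative assumptions; KG-S2S additionally uses improved entity/relation representations and a constrained inference algorithm, for which plain beam search stands in here.

from typing import Optional
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical backbone; KG-S2S's actual pretrained model may differ.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def flatten_query(head: str, relation: str, timestamp: Optional[str] = None) -> str:
    """Verbalize a (head, relation, ?) query as flat text.
    A temporal fact simply appends its timestamp, so static and temporal
    KGC share the same textual interface (assumed format)."""
    text = f"predict tail: {head} | {relation}"
    if timestamp is not None:
        text += f" | {timestamp}"
    return text

query = flatten_query("Barack Obama", "president of", timestamp="2010")
inputs = tokenizer(query, return_tensors="pt")
# Beam search as a stand-in for the paper's inference algorithm.
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because every query, static or temporal, is reduced to the same text-in/text-out interface, a single generative model can in principle serve multiple KG structures, which is the unification the abstract describes.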