Paper Title
Bi-Link: Bridging Inductive Link Predictions from Text via Contrastive Learning of Transformers and Prompts
Paper Authors
Paper Abstract
Inductive knowledge graph completion requires models to comprehend the underlying semantics and logical patterns of relations. With the advance of pretrained language models, recent studies have designed transformers for link prediction tasks. However, empirical studies show that linearizing triples affects the learning of relational patterns, such as inversion and symmetry. In this paper, we propose Bi-Link, a contrastive learning framework with probabilistic syntax prompts for link prediction. Using the grammatical knowledge of BERT, we efficiently search for relational prompts according to learned syntactic patterns that generalize to large knowledge graphs. To better express symmetric relations, we design a symmetric link prediction model that establishes bidirectional linking between forward prediction and backward prediction. This bidirectional linking accommodates flexible self-ensemble strategies at test time. In our experiments, Bi-Link outperforms recent baselines on link prediction datasets (WN18RR, FB15K-237, and Wikidata5M). Furthermore, we construct Zeshel-Ind as an in-domain inductive entity-linking environment to evaluate Bi-Link. The experimental results demonstrate that our method yields robust representations that generalize under domain shift.
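The bidirectional linking and test-time self-ensemble described in the abstract can be pictured with a minimal sketch. Everything below is an assumption for illustration, not the authors' released code: the mean-pooled embedding is a toy stand-in for a BERT-style encoder over textual descriptions, and the names (BiLinkToy, contrastive_loss, score_candidates) and hyperparameters (tau, alpha) are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of bidirectional link
# prediction with in-batch contrastive training and test-time self-ensemble.
import torch
import torch.nn.functional as F

class BiLinkToy(torch.nn.Module):
    def __init__(self, vocab: int = 1000, dim: int = 128):
        super().__init__()
        # Toy stand-in for a BERT-style text encoder of descriptions.
        self.embed = torch.nn.Embedding(vocab, dim)

    def encode(self, ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq) token ids -> L2-normalized (batch, dim) vectors.
        return F.normalize(self.embed(ids).mean(dim=1), dim=-1)

def contrastive_loss(model, head_ids, rel_ids, tail_ids, tau: float = 0.05):
    # Forward direction: a (head, relation) query should match its tail;
    # backward direction: a (tail, relation) query should match its head.
    # Other in-batch entities serve as negatives (InfoNCE).
    fwd_q = model.encode(torch.cat([head_ids, rel_ids], dim=1))
    bwd_q = model.encode(torch.cat([tail_ids, rel_ids], dim=1))
    heads, tails = model.encode(head_ids), model.encode(tail_ids)
    labels = torch.arange(head_ids.size(0))
    return (F.cross_entropy(fwd_q @ tails.t() / tau, labels)
            + F.cross_entropy(bwd_q @ heads.t() / tau, labels)) / 2

@torch.no_grad()
def score_candidates(model, head_ids, rel_ids, cand_ids, alpha: float = 0.5):
    # Rank candidate tails for one (head, relation) query, blending the
    # forward and backward scores; alpha is an assumed mixing weight.
    fwd_q = model.encode(torch.cat([head_ids, rel_ids], dim=1))   # (1, d)
    cands = model.encode(cand_ids)                                # (N, d)
    fwd = (fwd_q @ cands.t()).squeeze(0)                          # (N,)
    rel_rep = rel_ids.expand(cand_ids.size(0), -1)
    bwd_q = model.encode(torch.cat([cand_ids, rel_rep], dim=1))   # (N, d)
    head = model.encode(head_ids)                                 # (1, d)
    bwd = (bwd_q @ head.t()).squeeze(1)                           # (N,)
    return alpha * fwd + (1 - alpha) * bwd

# Toy usage: a batch of 4 triples, each side given as 3 token ids.
model = BiLinkToy()
h, r, t = (torch.randint(0, 1000, (4, 3)) for _ in range(3))
loss = contrastive_loss(model, h, r, t)
scores = score_candidates(model, h[:1], r[:1], torch.randint(0, 1000, (8, 3)))
```

Because the forward and backward queries are trained jointly, either direction can recover a link at test time, which is what makes the per-candidate blend above a natural self-ensemble for symmetric relations.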