Paper Title

Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries

Authors

Xiao Liu, Shiyu Zhao, Kai Su, Yukuo Cen, Jiezhong Qiu, Mengdi Zhang, Wei Wu, Yuxiao Dong, Jie Tang

Abstract

Knowledge graph (KG) embeddings have been a mainstream approach for reasoning over incomplete KGs. However, limited by their inherently shallow and static architectures, they can hardly deal with the rising focus on complex logical queries, which comprise logical operators, imputed edges, multiple source entities, and unknown intermediate entities. In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies. We design a KG triple transformation method to enable Transformer to handle KGs, which is further strengthened by the Mixture-of-Experts (MoE) sparse activation. We then formulate the complex logical queries as masked prediction and introduce a two-stage masked pre-training strategy to improve transferability and generalizability. Extensive experiments on two benchmarks demonstrate that kgTransformer can consistently outperform both KG embedding-based baselines and advanced encoders on nine in-domain and out-of-domain reasoning tasks. Additionally, kgTransformer can reason with explainability via providing the full reasoning paths to interpret given answers.
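
The abstract describes casting multi-hop logical queries as masked prediction over a Transformer that treats relations as ordinary graph nodes. Below is a minimal, illustrative sketch of that formulation only, assuming PyTorch; the model name, vocabulary sizes, and query encoding are hypothetical, and it omits the paper's graph-structured attention, MoE layers, and two-stage pre-training.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary sizes and special token ids -- not taken from the paper.
NUM_ENTITIES, NUM_RELATIONS = 1000, 50
MASK_ID = NUM_ENTITIES + NUM_RELATIONS          # [MASK] placeholder for unknown entities
VOCAB_SIZE = NUM_ENTITIES + NUM_RELATIONS + 1   # entity nodes + relation nodes + [MASK]

class ToyMaskedQueryModel(nn.Module):
    """Toy Transformer encoder that treats a query graph, flattened into a
    sequence of entity/relation nodes, as a masked-prediction problem."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, NUM_ENTITIES)  # score candidate entities

    def forward(self, node_ids):
        h = self.encoder(self.embed(node_ids))   # (batch, seq, d_model)
        return self.head(h)                      # per-position entity scores

# A 2-hop query such as  e_src --r1--> ? --r2--> [MASK]:
# relations are inserted as ordinary nodes, unknown entities become [MASK].
e_src, r1, r2 = 7, NUM_ENTITIES + 3, NUM_ENTITIES + 8
query = torch.tensor([[e_src, r1, MASK_ID, r2, MASK_ID]])

model = ToyMaskedQueryModel()
scores = model(query)                  # (1, 5, NUM_ENTITIES)
answer_scores = scores[0, -1]          # scores for the final masked answer node
print(answer_scores.topk(5).indices)   # top-5 candidate answer entities
```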
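
The abstract also mentions strengthening the Transformer with Mixture-of-Experts (MoE) sparse activation. The sketch below is a generic top-1-routed MoE feed-forward layer, again assuming PyTorch; it is not the paper's implementation, and the routing rule, expert count, and dimensions are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class ToySparseMoEFFN(nn.Module):
    """Toy Mixture-of-Experts feed-forward layer with top-1 sparse routing:
    each token is processed by a single expert chosen by a learned gate."""
    def __init__(self, d_model=64, d_hidden=128, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                       # x: (batch, seq, d_model)
        probs = self.gate(x).softmax(dim=-1)    # (batch, seq, num_experts)
        top_prob, top_idx = probs.max(dim=-1)   # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = top_idx == i                  # tokens routed to expert i
            if sel.any():
                out[sel] = top_prob[sel].unsqueeze(-1) * expert(x[sel])
        return out

moe = ToySparseMoEFFN()
tokens = torch.randn(2, 5, 64)
print(moe(tokens).shape)   # torch.Size([2, 5, 64])
```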
