Paper Title

Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering

Authors

Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, Hong Chen

Abstract

Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. A desired subgraph is crucial, as a small one may exclude the answer while a large one might introduce more noise. However, the existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning over partial subgraphs, which increases the reasoning bias when the intermediate supervision is missing. This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model. Extensive experiments demonstrate that SR achieves significantly better retrieval and QA performance than existing retrieval methods. Via weakly supervised pre-training as well as end-to-end fine-tuning, SR achieves new state-of-the-art performance for embedding-based KBQA methods when combined with NSM, a subgraph-oriented reasoner.
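To make the retrieve-then-reason idea in the abstract concrete, here is a minimal, hypothetical Python sketch of a decoupled pipeline: a subgraph retriever first expands a small subgraph around the topic entity hop by hop, and a separate, pluggable reasoner then answers from that subgraph alone. The toy triples, the token-overlap relation scorer, and all function names are illustrative assumptions, not the paper's actual SR or NSM implementations.

```python
# Hypothetical sketch of a decoupled retrieve-then-reason KBQA pipeline.
# The retriever and reasoner are separate components, mirroring the
# plug-and-play design described in the abstract. All data and scoring
# here are toy stand-ins, not the paper's trained models.

from collections import defaultdict

# Toy knowledge base: (head, relation, tail) triples.
TRIPLES = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "starred", "Leonardo DiCaprio"),
    ("Christopher Nolan", "born_in", "London"),
]


def build_index(triples):
    """Index outgoing edges by head entity for fast expansion."""
    index = defaultdict(list)
    for h, r, t in triples:
        index[h].append((r, t))
    return index


def relation_score(question, relation):
    """Stand-in for a trained relation scorer: crude prefix matching
    between relation tokens and question tokens."""
    q_words = question.lower().replace("?", "").split()
    score = 0
    for rt in relation.split("_"):
        if len(rt) >= 4 and any(w[:4] == rt[:4] for w in q_words):
            score += 1
    return score


def retrieve_subgraph(question, topic_entity, index, hops=2, top_k=1):
    """Retriever (SR) step: greedily expand the top-scoring relations at
    each hop, keeping the subgraph small and independent of the reasoner."""
    subgraph, frontier = [], [topic_entity]
    for _ in range(hops):
        next_frontier = []
        for ent in frontier:
            edges = sorted(
                index[ent],
                key=lambda e: relation_score(question, e[0]),
                reverse=True,
            )[:top_k]
            for r, t in edges:
                subgraph.append((ent, r, t))
                next_frontier.append(t)
        frontier = next_frontier
    return subgraph


def reason(question, subgraph):
    """Pluggable reasoner stub: any subgraph-oriented model (e.g. NSM)
    could be swapped in here; this stub returns the final hop's tail."""
    return subgraph[-1][2] if subgraph else None


index = build_index(TRIPLES)
question = "Where was the director of Inception born?"
sg = retrieve_subgraph(question, "Inception", index)
print(reason(question, sg))  # -> London
```

Because `retrieve_subgraph` never consults the reasoner, the same retrieved subgraph can be handed to any downstream reasoning model, which is the sense in which the framework is plug-and-play.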
