Paper Title

Relation Extraction with Explanation

Authors

Hamed Shahbazi, Xiaoli Z. Fern, Reza Ghaeini, Prasad Tadepalli

Abstract

Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy but little is known about their explainability. In this work we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of explanations afforded by the relation extraction models. We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation. We also propose to automatically generate "distractor" sentences to augment the bags and train the model to ignore the distractors. Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability.
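The entity-type substitution described in the abstract can be illustrated with a minimal sketch. This is not the paper's code; the sentence, entity mentions, and fine-grained type labels below are hypothetical examples chosen for illustration.

```python
# Illustrative sketch (not the paper's implementation): replace each entity
# mention in a sentence with its fine-grained entity type, as the abstract
# describes. Mentions and type labels here are hypothetical.

def replace_with_types(sentence, mentions):
    """Substitute each (mention, fine_grained_type) pair into the sentence."""
    for mention, fine_type in mentions:
        sentence = sentence.replace(mention, fine_type)
    return sentence

sentence = "Barack Obama was born in Honolulu."
mentions = [
    ("Barack Obama", "/person/politician"),
    ("Honolulu", "/location/city"),
]
print(replace_with_types(sentence, mentions))
# → /person/politician was born in /location/city.
```

In the paper's setting, the typed sentences are what the relation extraction model consumes, which the authors report improves both accuracy and the quality of sentence-level explanations.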
