Paper Title

Mind the Labels: Describing Relations in Knowledge Graphs With Pretrained Models

Paper Authors

Zdeněk Kasner, Ioannis Konstas, Ondřej Dušek

Paper Abstract

Pretrained language models (PLMs) for data-to-text (D2T) generation can use human-readable data labels such as column headings, keys, or relation names to generalize to out-of-domain examples. However, the models are well known to produce semantically inaccurate outputs if these labels are ambiguous or incomplete, which is often the case in D2T datasets. In this paper, we expose this issue on the task of describing a relation between two entities. For our experiments, we collect a novel dataset for verbalizing a diverse set of 1,522 unique relations from three large-scale knowledge graphs (Wikidata, DBPedia, YAGO). We find that although PLMs for D2T generation expectedly fail on unclear cases, models trained with a large variety of relation labels are surprisingly robust in verbalizing novel, unseen relations. We argue that using data with a diverse set of clear and meaningful labels is key to training D2T generation systems capable of generalizing to novel domains.
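
Below is a minimal sketch of the verbalization setup the abstract describes: a (subject, relation, object) triple is linearized into a text prompt together with its human-readable relation label and passed to a pretrained seq2seq model. The checkpoint name, prompt prefix, and linearization format here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: verbalizing a single KG relation with a pretrained
# seq2seq model. Checkpoint name and input format are illustrative
# assumptions; the paper fine-tunes PLMs on its own linearized data.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "t5-small"  # placeholder checkpoint for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def verbalize(subject: str, relation_label: str, obj: str) -> str:
    """Linearize a (subject, relation, object) triple and let the PLM
    generate a natural-language description of the relation."""
    linearized = f"describe: {subject} | {relation_label} | {obj}"
    inputs = tokenizer(linearized, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example with a clear, human-readable relation label (Wikidata-style).
print(verbalize("Douglas Adams", "educated at", "St John's College"))
```

The key point illustrated is that the model only sees the relation through its label; if "educated at" were replaced by an opaque identifier such as "P69", a fine-tuned model would have little to ground its output on, which is the failure mode the paper examines.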
