Paper Title

Self-supervised Graph Masking Pre-training for Graph-to-Text Generation

Paper Authors

Jiuzhou Han, Ehsan Shareghi

Paper Abstract

Large-scale pre-trained language models (PLMs) have advanced Graph-to-Text (G2T) generation by processing the linearised version of a graph. However, linearisation is known to discard structural information. Additionally, PLMs are typically pre-trained on free text, which introduces a domain mismatch between pre-training and downstream G2T generation tasks. To address these shortcomings, we propose graph masking pre-training strategies that neither require supervision signals nor adjust the architecture of the underlying pre-trained encoder-decoder model. When used with a pre-trained T5, our approach achieves new state-of-the-art results on the WebNLG+2020 and EventNarrative G2T generation datasets. Our method also proves highly effective in the low-resource setting.
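To make the idea concrete, below is a minimal sketch of how a knowledge graph can be linearised for a sequence-to-sequence PLM and how graph components might be masked to form a self-supervised denoising objective. The `<H>`/`<R>`/`<T>` markers, the masking probability, and the T5-style sentinel tokens are illustrative assumptions drawn from common G2T practice, not the paper's exact recipe.

```python
import random

def linearise(triples):
    """Flatten (head, relation, tail) triples into a single token sequence,
    using <H>/<R>/<T> markers (an assumed, commonly used linearisation)."""
    parts = []
    for head, rel, tail in triples:
        parts += ["<H>", head, "<R>", rel, "<T>", tail]
    return parts

def mask_graph_components(tokens, mask_prob=0.3, seed=0):
    """Replace a random subset of entity/relation tokens with T5-style sentinel
    tokens; the masked-out tokens become the reconstruction target."""
    rng = random.Random(seed)
    source, target = [], []
    sentinel_id = 0
    for tok in tokens:
        is_marker = tok in {"<H>", "<R>", "<T>"}
        if not is_marker and rng.random() < mask_prob:
            sentinel = f"<extra_id_{sentinel_id}>"
            source.append(sentinel)
            target += [sentinel, tok]
            sentinel_id += 1
        else:
            source.append(tok)
    return " ".join(source), " ".join(target)

if __name__ == "__main__":
    # Toy graph; in practice the source graphs come from datasets such as
    # WebNLG+2020 or EventNarrative.
    graph = [
        ("Alan_Turing", "birthPlace", "London"),
        ("Alan_Turing", "field", "Computer_Science"),
    ]
    src, tgt = mask_graph_components(linearise(graph))
    print("input :", src)
    print("target:", tgt)
```

Because the corrupted input and the reconstruction target are both plain token sequences, this objective can be trained with an off-the-shelf encoder-decoder such as T5 without any architectural changes, which is the property the abstract emphasises.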
