Paper Title
FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness
Paper Authors
Paper Abstract
Despite being able to generate fluent and grammatical text, current Seq2Seq summarization models still suffer from the unfaithful generation problem. In this paper, we study the faithfulness of existing systems from the new perspective of factual robustness, i.e., the ability to correctly generate factual information in the face of adversarial unfaithful information. We first measure a model's factual robustness by its success rate in defending against adversarial attacks when generating factual information. A factual robustness analysis of a wide range of current systems shows that it is well correlated with human judgments of faithfulness. Inspired by these findings, we propose to improve the faithfulness of a model by enhancing its factual robustness. Specifically, we propose a novel training strategy, namely FRSUM, which teaches the model to defend against both explicit adversarial samples and implicit factual adversarial perturbations. Extensive automatic and human evaluation results show that FRSUM consistently improves the faithfulness of various Seq2Seq models, such as T5 and BART.
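To make the success-rate metric concrete, the sketch below probes whether a summarization model assigns a higher score to a factual next token than to an adversarial, unfaithful alternative at a given decoding position, and aggregates this into a success rate. This is a minimal, hedged reading of the metric described in the abstract, not the paper's exact protocol: the attack construction, the `defends_attack` helper, and the example data are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not FRSUM's exact protocol): estimate
# "factual robustness" as the rate at which the model prefers a factual
# next token over an adversarial, unfaithful alternative.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
model.eval()

def first_token_id(word: str) -> int:
    # Score the first subword of " word"; assumes single-subword attack targets.
    return tokenizer(" " + word, add_special_tokens=False).input_ids[0]

def defends_attack(source: str, prefix: str, factual: str, adversarial: str) -> bool:
    """True if the model scores the factual token above the adversarial one."""
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    dec = tokenizer(prefix, return_tensors="pt", add_special_tokens=False)
    # BART decoding starts from the model's decoder start token.
    start = torch.tensor([[model.config.decoder_start_token_id]])
    decoder_input_ids = torch.cat([start, dec.input_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids,
                       attention_mask=enc.attention_mask,
                       decoder_input_ids=decoder_input_ids).logits
    next_logits = logits[0, -1]  # scores for the next generated token
    return next_logits[first_token_id(factual)].item() > \
           next_logits[first_token_id(adversarial)].item()

# Hypothetical attacked positions: (source, summary prefix, factual, adversarial).
attacks = [
    ("The agreement was signed in Paris in 2019 by both ministers.",
     "The agreement was signed in", "Paris", "London"),
]
rate = sum(defends_attack(*a) for a in attacks) / len(attacks)
print(f"Factual robustness (success rate): {rate:.2f}")
```

In this reading, a higher success rate over many attacked factual positions indicates a more factually robust, and by the paper's hypothesis a more faithful, model.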