Paper Title
Improved Natural Language Generation via Loss Truncation
Paper Authors
Daniel Kang, Tatsunori B. Hashimoto
Paper Abstract
Neural language models are usually trained to match the distributional properties of a large-scale corpus by minimizing the log loss. While straightforward to optimize, this approach forces the model to reproduce all variations in the dataset, including noisy and invalid references (e.g., misannotation and hallucinated facts). Worse, the commonly used log loss is overly sensitive to such phenomena and even a small fraction of noisy data can degrade performance. In this work, we show that the distinguishability of the models and reference serves as a principled and robust alternative for handling invalid references. To optimize distinguishability, we propose loss truncation, which adaptively removes high loss examples during training. We show this is as easy to optimize as log loss and tightly bounds distinguishability under noise. Empirically, we demonstrate that loss truncation outperforms existing baselines on distinguishability on a summarization task, and show that samples generated by the loss truncation model have factual accuracy ratings that exceed those of baselines and match human references.
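To make the mechanism concrete, here is a minimal sketch of a single training step with loss truncation, assuming a PyTorch classification-style setup. The names model, batch, optimizer, and drop_frac are hypothetical placeholders, not from the paper, and the paper itself estimates the drop quantile online over the course of training rather than recomputing it per batch as done here.

import torch
import torch.nn.functional as F

def loss_truncation_step(model, batch, optimizer, drop_frac=0.1):
    # One training step with loss truncation: compute the per-example
    # log loss, then drop the highest-loss fraction of the batch so that
    # noisy or invalid references do not dominate the gradient.
    inputs, targets = batch
    logits = model(inputs)                                  # (batch, num_classes)
    per_example = F.cross_entropy(logits, targets, reduction="none")

    # Keep only examples whose loss is at or below the (1 - drop_frac)
    # quantile; detach so the threshold itself carries no gradient.
    threshold = torch.quantile(per_example.detach(), 1.0 - drop_frac)
    mask = (per_example <= threshold).float()
    loss = (per_example * mask).sum() / mask.sum().clamp(min=1.0)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Detaching the per-example losses before taking the quantile keeps the threshold out of the gradient computation, so only the retained low-loss examples contribute to the parameter update.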