Paper Title
Q-learning with Language Model for Edit-based Unsupervised Summarization
Paper Authors
Paper Abstract
Unsupervised methods are promising for abstractive text summarization in that no parallel corpora are required. However, their performance is still far from satisfactory, so research on promising solutions is ongoing. In this paper, we propose a new approach based on Q-learning with edit-based summarization. The method combines two key modules to form an Editorial Agent and Language Model converter (EALM). The agent predicts edit actions (e.g., delete, keep, and replace), and then the LM converter deterministically generates a summary on the basis of the action signals. Q-learning is leveraged to train the agent to produce proper edit actions. Experimental results show that EALM delivers competitive performance compared with previous encoder-decoder-based methods, even with truly zero paired data (i.e., no validation set). Defining the task as Q-learning enables us not only to develop a competitive method but also to make the latest techniques in reinforcement learning available for unsupervised summarization. We also conduct qualitative analysis, providing insights for future research on unsupervised summarizers.
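
The abstract describes an agent-converter architecture trained with Q-learning: an agent assigns a per-token edit action, and a deterministic converter turns the action sequence into a summary. As a rough illustration of that interaction only, here is a minimal Python sketch pairing a toy tabular Q-learning agent with such a converter. All names (EditAgent, lm_convert, lm_fill) are hypothetical placeholders; the paper's actual state representation, reward, and language model are not reproduced here.

import random

ACTIONS = ["KEEP", "DELETE", "REPLACE"]  # per-token edit actions

def lm_convert(tokens, actions, lm_fill):
    # Deterministically assemble a summary from per-token edit actions;
    # REPLACE slots are filled by a language model (stubbed via lm_fill).
    out = []
    for tok, act in zip(tokens, actions):
        if act == "KEEP":
            out.append(tok)
        elif act == "REPLACE":
            out.append(lm_fill(out, tok))  # LM proposes a substitute token
        # DELETE: the token is simply dropped
    return out

class EditAgent:
    # Toy tabular Q-learning agent; states are any hashable context summary.
    def __init__(self, alpha=0.1, gamma=0.99, eps=0.1):
        self.q = {}  # maps (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:  # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning target: r + gamma * max_a' Q(s', a')
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

A hypothetical usage, with a trivial stand-in LM that uppercases replaced tokens:

tokens = ["the", "cat", "sat", "on", "the", "mat"]
agent = EditAgent()
actions = [agent.act((i, tok)) for i, tok in enumerate(tokens)]
summary = lm_convert(tokens, actions, lm_fill=lambda ctx, tok: tok.upper())

In the paper's framing, a reward derived from the generated summary would drive agent.update, which is what makes off-the-shelf reinforcement-learning techniques applicable to the unsupervised setting.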