Paper Title
On Generating Extended Summaries of Long Documents
Paper Authors
Paper Abstract
Prior work in document summarization has mainly focused on generating short summaries of a document. While this type of summary helps get a high-level view of a given document, it is desirable in some cases to know more detailed information about its salient points that cannot fit in a short summary. This is typically the case for longer documents such as a research paper, legal document, or a book. In this paper, we present a new method for generating extended summaries of long papers. Our method exploits the hierarchical structure of the documents and incorporates it into an extractive summarization model through a multi-task learning approach. We then present our results on three long summarization datasets: arXiv-Long, PubMed-Long, and Longsumm. Our method outperforms or matches the performance of strong baselines. Furthermore, we perform a comprehensive analysis over the generated results, shedding insights on future research for the long-form summary generation task. Our analysis shows that our multi-tasking approach can adjust the extraction probability distribution in favor of summary-worthy sentences across diverse sections. Our datasets and code are publicly available at https://github.com/Georgetown-IR-Lab/ExtendedSumm
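The abstract describes incorporating document structure into an extractive summarizer via multi-task learning. The sketch below is not the authors' released code (see the repository linked above for that); it only illustrates, under assumed names and dimensions, how a shared sentence encoder can feed two heads: one scoring sentences for extraction and one predicting the section each sentence belongs to, trained with a weighted joint loss.

```python
# Illustrative sketch of a multi-task extractive summarization model.
# All module names, dimensions, the number of section labels, and the
# loss weight `alpha` are assumptions for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskExtractor(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_sections: int = 5):
        super().__init__()
        # Shared contextual layer over pre-computed sentence embeddings.
        self.shared = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Task 1: extraction score per sentence (summary-worthy or not).
        self.extract_head = nn.Linear(hidden_dim, 1)
        # Task 2: section label per sentence (e.g. intro, method, results, ...).
        self.section_head = nn.Linear(hidden_dim, num_sections)

    def forward(self, sent_embeddings: torch.Tensor):
        # sent_embeddings: (batch, num_sentences, hidden_dim)
        h = self.shared(sent_embeddings)
        extract_logits = self.extract_head(h).squeeze(-1)   # (batch, num_sentences)
        section_logits = self.section_head(h)                # (batch, num_sentences, num_sections)
        return extract_logits, section_logits


def multitask_loss(extract_logits, section_logits, extract_labels, section_labels, alpha=0.5):
    # Joint objective: binary extraction loss plus weighted section-prediction loss.
    l_ext = F.binary_cross_entropy_with_logits(extract_logits, extract_labels.float())
    l_sec = F.cross_entropy(section_logits.transpose(1, 2), section_labels)
    return l_ext + alpha * l_sec
```

The intent of the auxiliary section-prediction task in such a setup is to push the shared representation to encode where a sentence sits in the document, which is one plausible way a model could shift extraction probability toward salient sentences spread across different sections.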