Paper Title

Evaluating Interactive Summarization: an Expansion-Based Framework

Paper Authors

Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, Ido Dagan

Abstract

Allowing users to interact with multi-document summarizers is a promising direction towards improving and customizing summary results. Different ideas for interactive summarization have been proposed in previous work but these solutions are highly divergent and incomparable. In this paper, we develop an end-to-end evaluation framework for expansion-based interactive summarization, which considers the accumulating information along an interactive session. Our framework includes a procedure of collecting real user sessions and evaluation measures relying on standards, but adapted to reflect interaction. All of our solutions are intended to be released publicly as a benchmark, allowing comparison of future developments in interactive summarization. We demonstrate the use of our framework by evaluating and comparing baseline implementations that we developed for this purpose, which will serve as part of our benchmark. Our extensive experimentation and analysis of these systems motivate our design choices and support the viability of our framework.
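To make the abstract's notion of "accumulating information along an interactive session" concrete, below is a minimal illustrative sketch (not the authors' released implementation) of how an expansion-based session might be scored: after each user expansion step, the cumulative summary text is evaluated against reference summaries with a standard measure (ROUGE-1 recall via the rouge-score package here), and the per-step scores are aggregated over the session, for example as the area under the score-vs-length curve. The session format, the choice of ROUGE-1 recall, and the AUC-style aggregation are assumptions made for illustration only.

```python
# Illustrative sketch only: scoring an expansion-based interactive session by
# evaluating the accumulated summary after each expansion step and aggregating
# the per-step scores over the session. The session representation and the
# AUC-style aggregation are assumptions, not the paper's released code.

from rouge_score import rouge_scorer


def score_session(session_steps, references):
    """session_steps: list of text snippets revealed at each expansion step (assumed format).
    references: list of reference summary strings.
    Returns a list of (cumulative word count, ROUGE-1 recall) points."""
    scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
    cumulative, points = "", []
    for step_text in session_steps:
        cumulative = (cumulative + " " + step_text).strip()
        # Score the accumulated summary against each reference; keep the best recall.
        recall = max(
            scorer.score(ref, cumulative)["rouge1"].recall for ref in references
        )
        points.append((len(cumulative.split()), recall))
    return points


def area_under_curve(points):
    """Trapezoidal area under the recall-vs-word-count curve (assumed aggregation)."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

A session with more informative expansions yields higher recall earlier, and therefore a larger area under the curve; this is one plausible way a standard measure can be "adapted to reflect interaction" as the abstract describes, though the framework's actual measures may differ.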
