Paper Title
Online Coreference Resolution for Dialogue Processing: Improving Mention-Linking on Real-Time Conversations
Paper Authors
Paper Abstract
This paper suggests a direction of coreference resolution for online decoding on actively generated input such as dialogue, where the model accepts an utterance and its past context, then finds mentions in the current utterance as well as their referents, upon each dialogue turn. A baseline and four incrementally updated models adapted from the mention-linking paradigm are proposed for this new setting, which address different aspects including singletons, speaker-grounded encoding, and cross-turn mention contextualization. Our approach is assessed on three datasets: Friends, OntoNotes, and BOLT. Results show that each aspect brings steady improvement, and our best models outperform the baseline by over 10%, presenting an effective system for this setting. Further analysis highlights the task characteristics, such as the significance of addressing mention recall.
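The online decoding described above can be illustrated with a minimal sketch: at each turn the system receives one utterance plus its past context, detects mentions in the new utterance only, and links each mention either to an existing entity cluster or to a new (possibly singleton) one. The function names and the toy scoring rule below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of an online mention-linking loop (assumed names and
# heuristics; the paper's models use learned encoders and scorers instead).
from dataclasses import dataclass, field

@dataclass
class Entity:
    mentions: list = field(default_factory=list)  # (turn_id, text) pairs

def detect_mentions(utterance):
    # Stand-in mention detector: treat capitalized tokens and pronouns as mentions.
    pronouns = {"he", "she", "it", "they", "him", "her", "them", "i", "you"}
    toks = [tok.strip(".,!?") for tok in utterance.split()]
    return [t for t in toks if t and (t.lower() in pronouns or t[0].isupper())]

def link_score(entity, mention):
    # Toy antecedent score: exact string match with any prior mention.
    return 1.0 if any(m == mention for _, m in entity.mentions) else 0.0

def process_turn(entities, turn_id, utterance, threshold=0.5):
    # One online decoding step: mentions in the current utterance are resolved
    # against clusters built from past turns; unlinked mentions start singletons.
    for mention in detect_mentions(utterance):
        scored = [(link_score(e, mention), e) for e in entities]
        best_score, best_entity = max(scored, key=lambda x: x[0], default=(0.0, None))
        if best_entity is not None and best_score >= threshold:
            best_entity.mentions.append((turn_id, mention))
        else:
            entities.append(Entity(mentions=[(turn_id, mention)]))
    return entities

# Utterances arrive one turn at a time; clusters are updated incrementally.
entities = []
for t, utt in enumerate(["Joey called Monica.", "He thanked Monica again."]):
    process_turn(entities, t, utt)
```

After the second turn, "Monica" links to her existing cluster while the unresolved pronoun "He" opens a new singleton, mirroring how the singleton and cross-turn aspects interact in this setting.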