Paper Title

IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems

Paper Authors

Liu Yang, Minghui Qiu, Chen Qu, Cen Chen, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, Haiqing Chen

Paper Abstract

Personal assistant systems, such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana, are becoming ever more widely used. Understanding user intent such as clarification questions, potential answers and user feedback in information-seeking conversations is critical for retrieving good responses. In this paper, we analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model "IART", which refers to "Intent-Aware Ranking with Transformers". IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture, which relies entirely on a self-attention mechanism instead of recurrent nets. It incorporates intent-aware utterance attention to derive an importance weighting scheme of utterances in conversation context with the aim of better conversation history understanding. We conduct extensive experiments with three information-seeking conversation data sets including both standard benchmarks and commercial data. Our proposed model outperforms all baseline methods with respect to a variety of metrics. We also perform case studies and analysis of learned user intent and its impact on response ranking in information-seeking conversations to provide interpretation of results.
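
To make the "intent-aware utterance attention" idea in the abstract more concrete, below is a minimal sketch of how predicted per-utterance intent distributions could be used to derive importance weights over the conversation context. This is not the paper's exact formulation: the projection matrix W, the multiplicative combination of intent and utterance representations, and all dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def intent_aware_utterance_weights(utterance_vecs, intent_vecs, response_vec, W):
    """Score each context utterance by combining its intent distribution with
    its match to the candidate response, then normalize to attention weights.

    utterance_vecs: (num_utterances, d)  encoded context utterances
    intent_vecs:    (num_utterances, k)  predicted intent distributions
    response_vec:   (d,)                 encoded candidate response
    W:              (k, d)               learned projection from intent space
    """
    # Project intent distributions into the utterance embedding space.
    intent_proj = intent_vecs @ W                # (num_utterances, d)
    # Combine intent information with the utterance representation (assumed
    # element-wise gating; the paper may combine them differently).
    intent_aware = utterance_vecs * intent_proj  # (num_utterances, d)
    # Attention score of each utterance against the candidate response.
    scores = intent_aware @ response_vec         # (num_utterances,)
    return softmax(scores)

# Toy usage: 3 context utterances, 8-dim embeddings, 4 intent labels.
rng = np.random.default_rng(0)
weights = intent_aware_utterance_weights(
    utterance_vecs=rng.normal(size=(3, 8)),
    intent_vecs=softmax(rng.normal(size=(3, 4))),
    response_vec=rng.normal(size=8),
    W=rng.normal(size=(4, 8)),
)
print(weights)  # importance weights over the context utterances, sum to 1
```

In the full model, such weights would rescale utterance-level matching signals before aggregation with the Transformer-based representations, so utterances whose intents matter more for the current response (e.g., clarifying questions) contribute more to the final ranking score.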
