Paper Title


DCR-Net: A Deep Co-Interactive Relation Network for Joint Dialog Act Recognition and Sentiment Classification

Authors

Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, Ting Liu

Abstract


In dialog systems, dialog act recognition and sentiment classification are two correlative tasks for capturing speakers' intentions, where dialog act and sentiment indicate the explicit and the implicit intentions, respectively. Most existing systems either treat them as separate tasks or jointly model the two tasks only by sharing parameters in an implicit way, without explicitly modeling their mutual interaction and relation. To address this problem, we propose a Deep Co-Interactive Relation Network (DCR-Net) that explicitly considers the cross-impact and models the interaction between the two tasks by introducing a co-interactive relation layer. In addition, the proposed relation layer can be stacked to gradually capture mutual knowledge through multiple steps of interaction. In particular, we thoroughly study different relation layers and their effects. Experimental results on two public datasets (Mastodon and DailyDialog) show that our model outperforms the state-of-the-art joint model by 4.3% and 3.4% in terms of F1 score on the dialog act recognition task, and by 5.7% and 12.4% on sentiment classification, respectively. Comprehensive analysis empirically verifies the effectiveness of explicitly modeling the relation between the two tasks and of the multi-step interaction mechanism. Finally, we employ Bidirectional Encoder Representations from Transformers (BERT) in our framework, which further boosts our performance on both tasks.
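The abstract's core idea, a stackable co-interactive relation layer in which each task attends over the other task's representation, can be illustrated with a minimal sketch. This is not the paper's actual implementation; the scaled dot-product attention, residual fusion, and all shapes and names below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_interactive_layer(h_act, h_sent):
    """One co-interactive step (illustrative): each task attends over
    the other task's utterance representations and fuses the result
    with a residual connection, so both views inform each other."""
    d = h_act.shape[-1]
    # dialog-act side attends over sentiment representations
    attn_a = softmax(h_act @ h_sent.T / np.sqrt(d))
    # sentiment side attends over dialog-act representations
    attn_s = softmax(h_sent @ h_act.T / np.sqrt(d))
    new_act = h_act + attn_a @ h_sent
    new_sent = h_sent + attn_s @ h_act
    return new_act, new_sent

rng = np.random.default_rng(0)
h_act = rng.standard_normal((5, 8))   # 5 utterances, hidden size 8
h_sent = rng.standard_normal((5, 8))

# Stacking the layer realizes the "multiple steps of interaction"
# described in the abstract.
for _ in range(3):
    h_act, h_sent = co_interactive_layer(h_act, h_sent)

print(h_act.shape, h_sent.shape)  # (5, 8) (5, 8)
```

Because the layer's input and output shapes match, it can be stacked an arbitrary number of times, which is what allows the model to deepen the cross-task interaction step by step.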
