Paper Title

R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching

Paper Authors

Kun Zhang, Le Wu, Guangyi Lv, Meng Wang, Enhong Chen, Shulan Ruan

Paper Abstract

Sentence semantic matching is one of the fundamental tasks in natural language processing, which requires an agent to determine the semantic relation among input sentences. Recently, deep neural networks have achieved impressive performance in this area, especially BERT. Despite the effectiveness of these models, most of them treat output labels as meaningless one-hot vectors, underestimating the semantic information and guidance of relations that these labels reveal, especially for tasks with a small number of labels. To address this problem, we propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching. Specifically, we first employ BERT to encode the input sentences from a global perspective. Then a CNN-based encoder is designed to capture keyword and phrase information from a local perspective. To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task for guiding R2-Net to consider more information about labels. Meanwhile, a triplet loss is employed to distinguish the intra-class and inter-class relations at a finer granularity. Empirical experiments on two sentence semantic matching tasks demonstrate the superiority of our proposed model. As a byproduct, we have released the code to facilitate other research.
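The triplet loss mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the standard margin-based formulation, where the anchor and positive share a relation label (intra-class) and the negative carries a different label (inter-class), so the loss vanishes once the negative is farther from the anchor than the positive by at least the margin:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two relation representations.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard margin-based triplet loss (an assumed formulation):
    # pull intra-class pairs (anchor, positive) together and push
    # inter-class pairs (anchor, negative) apart by at least `margin`.
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Satisfied triplet: negative is far enough, so the loss is zero.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # → 0.0
# Violated triplet: margin not met, so a positive loss remains.
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [1.5, 0.0]))  # → 0.5
```

In practice such a loss would be computed on the sentence-pair representations produced by the BERT and CNN encoders and combined with the classification objectives during training.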
