Paper Title

Graph Adaptive Semantic Transfer for Cross-domain Sentiment Classification

Authors

Kai Zhang, Qi Liu, Zhenya Huang, Mingyue Cheng, Kun Zhang, Mengdi Zhang, Wei Wu, Enhong Chen

Abstract

Cross-domain sentiment classification (CDSC) aims to use the transferable semantics learned from the source domain to predict the sentiment of reviews in the unlabeled target domain. Existing studies of this task pay more attention to the sequence modeling of sentences while largely ignoring the rich domain-invariant semantics embedded in graph structures (i.e., part-of-speech tags and dependency relations). As an important aspect of exploring the characteristics of language comprehension, adaptive graph representations have played an essential role in recent years. To this end, in this paper, we aim to explore the possibility of learning invariant semantic features from graph-like structures in CDSC. Specifically, we present the Graph Adaptive Semantic Transfer (GAST) model, an adaptive syntactic graph embedding method that is able to learn domain-invariant semantics from both word sequences and syntactic graphs. More specifically, we first propose a POS-Transformer module to extract sequential semantic features from the word sequences as well as the part-of-speech tags. Then, we design a Hybrid Graph Attention (HGAT) module to generate syntax-based semantic features by considering the transferable dependency relations. Finally, we devise an Integrated aDaptive Strategy (IDS) to guide the joint learning process of both modules. Extensive experiments on four public datasets indicate that GAST achieves comparable effectiveness to a range of state-of-the-art models.
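To make the two feature streams in the abstract concrete, below is a minimal PyTorch-style sketch of a POS-aware sequence encoder combined with attention restricted to dependency edges. The module names (POSTransformer, DependencyGraphAttention, GASTSketch), dimensions, and the mean-pool-and-concatenate fusion are illustrative assumptions, not the authors' implementation; the sketch also simplifies HGAT to plain masked self-attention and omits the Integrated aDaptive Strategy (IDS) used for domain-adaptive joint training.

```python
# Illustrative sketch only: names, sizes, and fusion are assumptions, and the
# IDS domain-adaptation objective from the paper is not modeled here.
import torch
import torch.nn as nn


class POSTransformer(nn.Module):
    """Encodes word embeddings enriched with part-of-speech tag embeddings."""

    def __init__(self, vocab_size, pos_size, dim=128, heads=4, layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(pos_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, words, pos_tags):
        # words, pos_tags: (batch, seq_len) integer ids
        x = self.word_emb(words) + self.pos_emb(pos_tags)
        return self.encoder(x)  # (batch, seq_len, dim)


class DependencyGraphAttention(nn.Module):
    """Self-attention masked so each token attends only to its dependency neighbors."""

    def __init__(self, dim=128):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (batch, seq_len, dim); adj: (batch, seq_len, seq_len) dependency adjacency
        # Add self-loops so every token has at least one neighbor to attend to.
        mask = (adj != 0) | torch.eye(adj.size(-1), device=adj.device, dtype=torch.bool)
        scores = self.q(h) @ self.k(h).transpose(1, 2) / h.size(-1) ** 0.5
        attn = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1)
        return attn @ self.v(h)


class GASTSketch(nn.Module):
    """Fuses sequential and syntax-based features for sentiment prediction."""

    def __init__(self, vocab_size, pos_size, dim=128, num_classes=2):
        super().__init__()
        self.seq_encoder = POSTransformer(vocab_size, pos_size, dim)
        self.graph_encoder = DependencyGraphAttention(dim)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, words, pos_tags, adj):
        seq_feat = self.seq_encoder(words, pos_tags)    # sequential semantics
        graph_feat = self.graph_encoder(seq_feat, adj)  # syntax-based semantics
        pooled = torch.cat([seq_feat.mean(dim=1), graph_feat.mean(dim=1)], dim=-1)
        return self.classifier(pooled)  # (batch, num_classes) sentiment logits
```

In such a setup, the adjacency matrix adj would come from an off-the-shelf dependency parser, and a cross-domain objective over labeled source and unlabeled target batches would be layered on top of the classification loss, which is where a strategy like the paper's IDS would come in.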
