Paper Title


Mere Contrastive Learning for Cross-Domain Sentiment Analysis

Authors

Yun Luo, Fang Guo, Zihan Liu, Yue Zhang

Abstract


Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using a model trained on the source domain, coping with the scarcity of labeled data. Previous studies are mostly cross-entropy-based methods for the task, which suffer from instability and poor generalization. In this paper, we explore contrastive learning for the cross-domain sentiment analysis task. We propose a modified contrastive objective with in-batch negative samples, so that sentence representations from the same class are pushed close together while those from different classes are pushed further apart in the latent space. Experiments on two widely used datasets show that our model achieves state-of-the-art performance in both cross-domain and multi-domain sentiment analysis tasks. Meanwhile, visualizations demonstrate the effectiveness of transferring knowledge learned in the source domain to the target domain, and an adversarial test verifies the robustness of our model.
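The core idea of the abstract, a supervised contrastive objective with in-batch negatives, can be sketched as follows. This is a generic NumPy illustration of that family of losses, not the authors' exact modified objective; the function name and the temperature value are illustrative assumptions.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss with in-batch negatives.

    For each anchor, other in-batch examples with the same label act as
    positives and all remaining examples act as negatives, so same-class
    representations are pulled together and different-class ones pushed apart.
    (Illustrative only; not the paper's exact modified objective.)
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise cosine similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)    # exclude self-comparisons

    # log-softmax of each anchor's similarity over all other batch items
    sim_max = sim.max(axis=1, keepdims=True)          # numerical stability
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    labels = np.asarray(labels)
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    # negative mean log-probability of the positives, averaged over anchors
    per_anchor = (log_prob * pos_mask).sum(axis=1) / np.maximum(
        pos_mask.sum(axis=1), 1)
    return -per_anchor.mean()
```

A batch whose same-label embeddings already cluster together yields a lower loss than one where labels are mixed across clusters, which is the gradient signal that shapes the latent space described above.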
