Paper Title

Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation

Authors

Wang, Peiyi, Song, Yifan, Liu, Tianyu, Lin, Binghuai, Cao, Yunbo, Li, Sujian, Sui, Zhifang

Abstract

Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream. CRE models usually suffer from the catastrophic forgetting problem, i.e., performance on old relations degrades seriously when the model learns new relations. Most previous work attributes catastrophic forgetting to the corruption of the learned representations as new relations arrive, with an implicit assumption that the CRE models have adequately learned the old relations. In this paper, through empirical studies, we argue that this assumption may not hold, and that an important cause of catastrophic forgetting is that the learned representations are not robust against the appearance of analogous relations in the subsequent learning process. To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement and model-agnostic. Experimental results show that ACA consistently improves the performance of state-of-the-art CRE models on two popular benchmarks.
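The abstract does not spell out how ACA constructs its augmented classes. As a rough illustration of the general idea of class augmentation, the hypothetical sketch below builds synthetic "hybrid" classes by mixing instances from pairs of existing relations, so that a classifier trained on the augmented label set must find finer-grained features to tell analogous relations apart (the function name, data layout, and mixing rule are all assumptions for illustration; the paper's exact construction may differ):

```python
from itertools import combinations

def augment_with_hybrid_classes(dataset):
    """Sketch of class augmentation for relation classification.

    dataset: dict mapping a relation name to a list of instances,
             e.g. (head_entity, tail_entity, sentence) tuples.
    Returns a new dict that keeps all original classes and adds one
    synthetic 'hybrid' class per pair of relations, each containing
    half of the instances of both members of the pair.
    """
    augmented = dict(dataset)
    for r1, r2 in combinations(sorted(dataset), 2):
        # A hybrid class deliberately confounds two relations; learning
        # to separate it from its parents pushes the encoder toward
        # more precise, robust representations.
        hybrid_name = f"hybrid::{r1}+{r2}"
        half1 = dataset[r1][: len(dataset[r1]) // 2]
        half2 = dataset[r2][: len(dataset[r2]) // 2]
        augmented[hybrid_name] = half1 + half2
    return augmented
```

During training, the hybrid classes are used only as extra targets for the classifier; at test time the model still predicts over the original relation set, so the mechanism stays model-agnostic.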
