Paper Title
Data Augmentation using Pre-trained Transformer Models
Paper Authors
Paper Abstract
Language-model-based pre-trained models such as BERT have provided significant gains across different NLP tasks. In this paper, we study different types of transformer-based pre-trained models, such as auto-regressive models (GPT-2), auto-encoder models (BERT), and seq2seq models (BART), for conditional data augmentation. We show that prepending the class label to text sequences provides a simple yet effective way to condition the pre-trained models for data augmentation. Additionally, on three classification benchmarks, the pre-trained seq2seq model outperforms other data augmentation methods in a low-resource setting. Further, we explore how data augmentation based on different pre-trained models differs in terms of data diversity, and how well such methods preserve the class-label information.
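The label-prepending idea described in the abstract can be sketched with a few lines of HuggingFace `transformers` code. This is an illustrative sketch, not the authors' released implementation: the formatting string, the sampling parameters, and the `augment` helper are assumptions, and the fine-tuning step on the label-prepended training set is omitted for brevity.

```python
# Minimal sketch: condition a pre-trained autoregressive LM (GPT-2) for
# data augmentation by prepending the class label to each text sequence.
# Assumes the HuggingFace `transformers` library; in practice the model
# would first be fine-tuned on label-prepended training examples.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")


def format_example(label: str, text: str) -> str:
    # Training-time formatting (assumed): class label placed before the text
    # so the LM learns to associate the label prefix with class-specific text.
    return f"{label} {tokenizer.bos_token} {text} {tokenizer.eos_token}"


def augment(label: str, num_samples: int = 3, max_length: int = 64):
    # Generation time: prompt the (ideally fine-tuned) model with only the
    # label prefix, so sampled continuations are conditioned on that class.
    prompt = f"{label} {tokenizer.bos_token}"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(
        input_ids,
        do_sample=True,          # sampling yields more diverse augmentations
        top_p=0.9,
        max_length=max_length,
        num_return_sequences=num_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]


# Example: generate synthetic sentences for a hypothetical "positive" class.
print(augment("positive"))
```

The same label-prefix conditioning carries over to the BERT and BART variants discussed in the paper, with generation replaced by masked-token infilling or seq2seq decoding, respectively.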