Paper Title
CharBERT: Character-aware Pre-trained Language Model
Paper Authors
Paper Abstract
Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variants, which almost entirely avoids OOV (out-of-vocab) words. However, these methods split a word into subword units, making the representation incomplete and fragile. In this paper, we propose a character-aware pre-trained language model named CharBERT that improves on previous methods (such as BERT, RoBERTa) to tackle these problems. We first construct a contextual word embedding for each token from its sequential character representations, then fuse the character representations and the subword representations through a novel heterogeneous interaction module. We also propose a new pre-training task named NLM (Noisy LM) for unsupervised character representation learning. We evaluate our method on question answering, sequence labeling, and text classification tasks, on both the original datasets and adversarial misspelling test sets. The experimental results show that our method can significantly improve the performance and robustness of PLMs simultaneously. Pretrained models, evaluation sets, and code are available at https://github.com/wtma/CharBERT.
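The abstract outlines a two-channel design: a character encoder that builds one contextual embedding per token from its character sequence, and an interaction module that fuses it with the usual subword representation. Below is a minimal, illustrative PyTorch sketch of that general idea only; it is not the authors' implementation (see the linked repository for CharBERT itself), and all module and parameter names here (CharTokenEncoder, CharSubwordFusion, char_dim, ...) are hypothetical.

```python
# Illustrative sketch of the character-aware idea from the abstract (not CharBERT itself):
# encode each token's character sequence into an embedding, then fuse it with the
# subword embedding. Names and the fusion scheme here are assumptions for illustration.
import torch
import torch.nn as nn

class CharTokenEncoder(nn.Module):
    """Builds one embedding per token from its character IDs with a bidirectional GRU."""
    def __init__(self, num_chars: int, char_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim, padding_idx=0)
        self.gru = nn.GRU(char_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, num_tokens, max_chars) -> (batch, num_tokens, 2 * hidden_dim)
        b, t, c = char_ids.shape
        x = self.char_emb(char_ids.view(b * t, c))   # (b*t, max_chars, char_dim)
        _, h = self.gru(x)                           # final states: (2, b*t, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)          # concat both directions
        return h.view(b, t, -1)

class CharSubwordFusion(nn.Module):
    """Toy stand-in for the heterogeneous interaction module: project both channels
    to a shared space and combine them with a learned gate (the real module differs)."""
    def __init__(self, subword_dim: int, char_dim: int, out_dim: int):
        super().__init__()
        self.proj_sub = nn.Linear(subword_dim, out_dim)
        self.proj_char = nn.Linear(char_dim, out_dim)
        self.gate = nn.Linear(2 * out_dim, out_dim)

    def forward(self, subword_repr: torch.Tensor, char_repr: torch.Tensor) -> torch.Tensor:
        s, c = self.proj_sub(subword_repr), self.proj_char(char_repr)
        g = torch.sigmoid(self.gate(torch.cat([s, c], dim=-1)))
        return g * s + (1.0 - g) * c

# Shape check with random inputs
char_enc = CharTokenEncoder(num_chars=100)
fusion = CharSubwordFusion(subword_dim=768, char_dim=256, out_dim=768)
chars = torch.randint(1, 100, (2, 16, 12))   # (batch, tokens, chars per token)
subwords = torch.randn(2, 16, 768)           # e.g. hidden states from a BERT-style encoder
fused = fusion(subwords, char_enc(chars))
print(fused.shape)                           # torch.Size([2, 16, 768])
```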