Paper Title
Applying Wav2vec2.0 to Speech Recognition in Various Low-resource Languages
Paper Authors
Paper Abstract
Several domains have their own widely used feature extractors, such as ResNet, BERT, and GPT-x. These models are usually pre-trained on large amounts of unlabeled data by self-supervision and can be applied effectively to downstream tasks. In the speech domain, wav2vec2.0 has begun to show its powerful representation ability and the feasibility of ultra-low-resource speech recognition on the Librispeech corpus, which belongs to the audiobook domain. However, wav2vec2.0 has not been examined in real spoken scenarios or in languages other than English. To verify its universality across languages, we apply pre-trained models to solve low-resource speech recognition tasks in various spoken languages. We achieve more than 20% relative improvement in six languages compared with previous work; among these languages, English gains 52.4%. Moreover, using coarse-grained modeling units, such as subwords or characters, achieves better results than fine-grained modeling units, such as phones or letters.
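As a concrete illustration of the workflow the abstract describes (not the authors' code), the sketch below applies a pre-trained wav2vec2.0 model with a CTC head to speech recognition via the Hugging Face transformers library. The checkpoint name `facebook/wav2vec2-base-960h` and the 16 kHz mono input are assumptions for the example; the paper's own experiments fine-tune pre-trained models per language with different modeling units.

```python
# Minimal sketch (assumed setup, not the paper's implementation): greedy CTC
# decoding with a pre-trained wav2vec2.0 checkpoint from Hugging Face.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical checkpoint choice; any wav2vec2.0 CTC model would work here.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

def transcribe(waveform):
    """Transcribe a 1-D float array of 16 kHz mono audio samples."""
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits  # (batch, time, vocab)
    ids = torch.argmax(logits, dim=-1)              # greedy CTC decoding
    return processor.batch_decode(ids)[0]
```

For a new low-resource language, the same pre-trained encoder would instead be fine-tuned with a CTC output layer over the chosen modeling units (e.g., subwords or characters), which is where the abstract's comparison of unit granularities applies.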