Paper Title


How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications

Authors

Juan Zuluaga-Gomez, Amrutha Prasad, Iuliia Nigmatulina, Saeed Sarfjoo, Petr Motlicek, Matthias Kleinert, Hartmut Helmke, Oliver Ohneiser, Qingran Zhan

Abstract


Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions of 20% to 40% are obtained in comparison to hybrid-based ASR baselines by only fine-tuning E2E acoustic models with a smaller fraction of labeled data. We also analyze WERs in the low-resource scenario and the gender bias carried by one ATC dataset.
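The relative WER reductions reported above follow a standard definition: WER is the word-level edit distance normalized by reference length, and a relative reduction compares a new model's WER against a baseline. A minimal sketch (function names are illustrative, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,              # substitution or match
                           dp[i - 1][j] + 1,  # deletion
                           dp[i][j - 1] + 1)  # insertion
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def relative_wer_reduction(baseline_wer: float, new_wer: float) -> float:
    """Relative WER reduction of a new model vs. a baseline, as a fraction."""
    return (baseline_wer - new_wer) / baseline_wer
```

For example, a hybrid baseline at 10% WER and a fine-tuned E2E model at 7% WER would correspond to a 30% relative reduction, within the 20-40% range the abstract reports.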
