Paper Title


Improving Noise Robustness of an End-to-End Neural Model for Automatic Speech Recognition

Authors

Jagadeesh Balam, Jocelyn Huang, Vitaly Lavrukhin, Slyne Deng, Somshubra Majumdar, Boris Ginsburg

Abstract


We present our experiments in training a noise-robust end-to-end automatic speech recognition (ASR) model using intensive data augmentation. We explore the efficacy of fine-tuning a pre-trained model to improve noise robustness, and we find it to be a very efficient way to train for various noisy conditions, especially when the conditions in which the model will be used are unknown. Starting with a model trained on clean data helps establish baseline performance on clean speech. We carefully fine-tune this model to both maintain the performance on clean speech and improve the model's accuracy in noisy conditions. With this scheme, we trained noise-robust English and Mandarin ASR models on large public corpora. All described models and training recipes are open-sourced in NeMo, a toolkit for conversational AI.
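The data augmentation the abstract describes centers on mixing noise into clean training speech. As a minimal sketch of the core idea (not the actual NeMo implementation; the function name and signature here are illustrative), one can scale a noise signal so the mixture hits a target signal-to-noise ratio:

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix noise into clean speech at a target SNR in dB.

    Illustrative noise-augmentation sketch; hypothetical helper, not NeMo's API.
    """
    # Tile or truncate the noise clip to match the speech length.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so clean_power / scaled_noise_power equals 10^(snr_db/10).
    scale = np.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

# Example: a synthetic tone standing in for speech, plus white noise at 10 dB SNR.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
noise = rng.standard_normal(16000)
noisy = mix_at_snr(speech, noise, snr_db=10.0)
```

During fine-tuning, each training utterance would be mixed with randomly chosen noise at a randomly sampled SNR, so the model sees a wide range of noisy conditions while clean examples preserve its baseline performance.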
