Paper Title

MelHuBERT: A simplified HuBERT on Mel spectrograms

Paper Authors

Tzu-Quan Lin, Hung-yi Lee, Hao Tang

Abstract

Self-supervised models have had great success in learning speech representations that can generalize to various downstream tasks. However, most self-supervised models require a large amount of compute and multiple GPUs to train, significantly hampering the development of self-supervised learning. In an attempt to reduce the computation of training, we revisit the training of HuBERT, a highly successful self-supervised model. We improve and simplify several key components, including the loss function, input representation, and training in multiple stages. Our model, MelHuBERT, is able to achieve favorable performance on phone recognition, speaker identification, and automatic speech recognition against HuBERT, while saving 31.2% of the pre-training time, or equivalently 33.5% MACs per one-second speech. The code and pre-trained models are available at https://github.com/nervjack2/MelHuBERT.