Paper Title
Exploring the influence of fine-tuning data on wav2vec 2.0 model for blind speech quality prediction
Paper Authors
Paper Abstract
Recent studies have shown how self-supervised models can produce accurate speech quality predictions. Speech representations generated by the pre-trained wav2vec 2.0 model allow robust prediction models to be built from small amounts of annotated data. This opens the possibility of developing strong models in scenarios where labelled data is scarce. It is known that fine-tuning improves a model's performance; however, it is unclear how the data used for fine-tuning (e.g., its language or number of samples) influences that performance. In this paper, we explore how fine-tuning wav2vec 2.0 on different speech corpora affects its performance. We took four speech datasets containing degradations found in common conferencing applications and fine-tuned wav2vec 2.0 under different language and data-size scenarios. The fine-tuned models were tested on all four conferencing datasets plus an additional dataset containing synthetic speech, and were compared against three external baseline models. Results showed that the fine-tuned models were able to compete with the baseline models. Larger fine-tuning datasets ensured better performance, while language diversity helped the models deal with specific languages. Further research is needed to evaluate other wav2vec 2.0 models pre-trained on multilingual datasets and to develop prediction models that are more resilient to language diversity.
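The abstract describes building a blind quality predictor on top of pre-trained wav2vec 2.0 representations and fine-tuning it with labelled quality (MOS) data. The following is a minimal sketch of that general setup, assuming the Hugging Face transformers implementation of wav2vec 2.0; the checkpoint name, mean-pooling regression head, and hyper-parameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a pre-trained wav2vec 2.0
# encoder with a small regression head, fine-tuned to predict a MOS-style
# quality score from raw audio. Checkpoint name, head size, and learning rate
# are assumptions for illustration only.

import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class Wav2Vec2MOSPredictor(nn.Module):
    def __init__(self, pretrained_name: str = "facebook/wav2vec2-base"):
        super().__init__()
        # Pre-trained self-supervised encoder (assumed checkpoint name).
        self.encoder = Wav2Vec2Model.from_pretrained(pretrained_name)
        hidden = self.encoder.config.hidden_size
        # Small regression head mapping pooled frame representations to one score.
        self.head = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, waveforms: torch.Tensor) -> torch.Tensor:
        # waveforms: (batch, samples) raw 16 kHz audio.
        features = self.encoder(waveforms).last_hidden_state  # (batch, frames, hidden)
        pooled = features.mean(dim=1)                          # time-average pooling
        return self.head(pooled).squeeze(-1)                   # (batch,) predicted score


if __name__ == "__main__":
    model = Wav2Vec2MOSPredictor()
    # Fine-tuning encoder and head jointly; freezing the encoder would
    # correspond to the low-annotation setting mentioned in the abstract.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    loss_fn = nn.MSELoss()

    dummy_audio = torch.randn(2, 16000)   # two 1-second clips (placeholder data)
    dummy_mos = torch.tensor([3.5, 4.2])  # placeholder quality labels

    prediction = model(dummy_audio)
    loss = loss_fn(prediction, dummy_mos)
    loss.backward()
    optimizer.step()
    print("predicted scores:", prediction.detach().tolist())
```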