Paper Title

The NU Voice Conversion System for the Voice Conversion Challenge 2020: On the Effectiveness of Sequence-to-sequence Models and Autoregressive Neural Vocoders

Authors

Wen-Chin Huang, Patrick Lumban Tobing, Yi-Chiao Wu, Kazuhiro Kobayashi, Tomoki Toda

Abstract

In this paper, we present the voice conversion (VC) systems developed at Nagoya University (NU) for the Voice Conversion Challenge 2020 (VCC2020). We aim to determine the effectiveness of two recent significant technologies in VC: sequence-to-sequence (seq2seq) models and autoregressive (AR) neural vocoders. Two respective systems were developed for the two tasks in the challenge: for task 1, we adopted the Voice Transformer Network, a Transformer-based seq2seq VC model, and extended it with synthetic parallel data to tackle nonparallel data; for task 2, we used the frame-based cyclic variational autoencoder (CycleVAE) to model the spectral features of a speech waveform and the AR WaveNet vocoder with additional fine-tuning. By comparing with the baseline systems, we confirmed that the seq2seq modeling can improve the conversion similarity and that the use of AR vocoders can improve the naturalness of the converted speech.
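The abstract credits autoregressive (AR) neural vocoders with improving the naturalness of converted speech. The defining property of AR generation is that each waveform sample is predicted conditioned on the previously generated samples and fed back as input for the next step. A minimal sketch of this feedback loop is shown below; the toy two-tap predictor stands in for a neural network purely for illustration and is not the authors' WaveNet vocoder:

```python
def generate_autoregressive(predict_next, seed, num_samples):
    """Generate a sequence one sample at a time: each new sample is
    predicted from the history of already-generated samples, then
    appended to that history (the autoregressive feedback loop)."""
    samples = list(seed)
    for _ in range(num_samples):
        samples.append(predict_next(samples))
    return samples[len(seed):]  # return only the newly generated samples

# Toy stand-in for a trained neural predictor: average of the last
# two samples (hypothetical; a real vocoder conditions on acoustic
# features such as mel-spectrograms as well).
toy_model = lambda history: 0.5 * (history[-1] + history[-2])

out = generate_autoregressive(toy_model, seed=[0.0, 1.0], num_samples=4)
```

In a real AR vocoder such as WaveNet, `predict_next` is a deep network that also takes frame-level spectral features as conditioning, which is what allows it to synthesize the converted waveform; the sequential sample-by-sample loop is what makes AR vocoders slow at inference but strong in quality.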
