Paper Title

VideoDubber: Machine Translation with Speech-Aware Length Control for Video Dubbing

Paper Authors

Yihan Wu, Junliang Guo, Xu Tan, Chen Zhang, Bohan Li, Ruihua Song, Lei He, Sheng Zhao, Arul Menezes, Jiang Bian

Paper Abstract

Video dubbing aims to translate the original speech in a film or television program into speech in a target language, which can be achieved with a cascaded system consisting of speech recognition, machine translation, and speech synthesis. To ensure that the translated speech is well aligned with the corresponding video, the length/duration of the translated speech should be as close as possible to that of the original speech, which requires strict length control. Previous works usually control the number of words or characters generated by the machine translation model to be similar to that of the source sentence, without considering speech isochrony, even though the speech duration of words/characters varies across languages. In this paper, we propose a machine translation system tailored for the task of video dubbing, which directly considers the speech duration of each token in translation to match the length of the source and target speech. Specifically, we control the speech length of the generated sentence by guiding the prediction of each word with duration information, including its own speech duration as well as how much duration is left for the remaining words. We design experiments on four language directions (German -> English, Spanish -> English, Chinese <-> English), and the results show that the proposed method achieves better length control ability on the generated speech than baseline methods. To make up for the lack of real-world datasets, we also construct a real-world test set collected from films to provide comprehensive evaluations on the video dubbing task.
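
The duration-guided generation described in the abstract can be illustrated with a small sketch. The snippet below is a hypothetical illustration rather than the authors' implementation: it assumes per-token speech durations (e.g., from a TTS duration predictor) are available and shows how each token's own duration and the remaining duration budget could be injected into the decoder input as extra embeddings. The names DurationAwareDecoderInput, bucket, and all hyperparameters are invented for illustration.

```python
# Minimal sketch (assumed design, not the paper's code): condition each decoding
# step on (1) the speech duration of the token itself and (2) how much of the
# source-speech duration budget is still left for the remaining words.
import torch
import torch.nn as nn


def bucket(duration_frames: torch.Tensor, num_buckets: int = 64) -> torch.Tensor:
    """Quantize continuous durations (in frames) into embedding buckets."""
    return duration_frames.clamp(min=0, max=num_buckets - 1).long()


class DurationAwareDecoderInput(nn.Module):
    """Adds duration information to ordinary token embeddings (illustrative)."""

    def __init__(self, vocab_size: int, d_model: int, num_buckets: int = 64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.self_dur_emb = nn.Embedding(num_buckets, d_model)    # duration of the token itself
        self.remain_dur_emb = nn.Embedding(num_buckets, d_model)  # budget left for remaining words

    def forward(self, tokens, token_durations, total_budget):
        # Speech duration consumed before each position.
        consumed = torch.cumsum(token_durations, dim=-1) - token_durations
        # Remaining budget at each position, clipped at zero.
        remaining = (total_budget.unsqueeze(-1) - consumed).clamp(min=0)
        return (
            self.token_emb(tokens)
            + self.self_dur_emb(bucket(token_durations))
            + self.remain_dur_emb(bucket(remaining))
        )


# Toy usage: one sentence of 4 tokens with a 40-frame source-speech budget.
layer = DurationAwareDecoderInput(vocab_size=100, d_model=16)
tokens = torch.tensor([[5, 17, 3, 2]])
durations = torch.tensor([[12, 9, 14, 5]])  # frames per target token (assumed given)
budget = torch.tensor([40])                 # frames of the original speech
x = layer(tokens, durations, budget)        # shape (1, 4, 16), fed to the decoder
print(x.shape)
```

In this reading, the decoder "sees" the shrinking duration budget at every step, so it can prefer shorter or longer wordings and end the sentence when the budget is nearly exhausted, which is how the abstract's speech-aware length control can be understood.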
