Paper Title
Serialized Output Training for End-to-End Overlapped Speech Recognition
Paper Authors
Paper Abstract
This paper proposes serialized output training (SOT), a novel framework for multi-speaker overlapped speech recognition based on an attention-based encoder-decoder approach. Instead of having multiple output layers as in permutation invariant training (PIT), SOT uses a model with only one output layer that generates the transcriptions of multiple speakers one after another. The attention and decoder modules take care of producing multiple transcriptions from overlapped speech. SOT has two advantages over PIT: (1) no limitation on the maximum number of speakers, and (2) the ability to model dependencies among the outputs for different speakers. We also propose a simple trick that allows SOT to be executed in $O(S)$, where $S$ is the number of speakers in a training sample, by using the start times of the constituent source utterances. Experimental results on the LibriSpeech corpus show that SOT models can transcribe overlapped speech with variable numbers of speakers significantly better than PIT-based models. We also show that SOT models can accurately count the number of speakers in the input audio.
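To make the serialization concrete, below is a minimal Python sketch (not the authors' code) of how reference labels could be built for SOT training: the transcriptions of the overlapping speakers are concatenated in order of utterance start time, separated by a speaker-change token. The token names `<sc>` and `<eos>` and the helper `serialize_references` are illustrative assumptions based on the abstract's description.

```python
# Minimal sketch of SOT reference serialization (illustrative, not the paper's code).
# Assumption: each source utterance is a (start_time, token_list) pair;
# "<sc>" marks a speaker change and "<eos>" ends the whole target sequence.

def serialize_references(utterances):
    """Build a single SOT target sequence from overlapped source utterances.

    Sorting by start time is the abstract's "simple trick": fixing the
    speaker order up front leaves only one label permutation to process,
    so serialization runs in O(S) rather than considering all S!
    permutations as PIT does.
    """
    ordered = sorted(utterances, key=lambda u: u[0])  # first to speak comes first
    target = []
    for i, (_, tokens) in enumerate(ordered):
        if i > 0:
            target.append("<sc>")  # speaker-change token between speakers
        target.extend(tokens)
    target.append("<eos>")
    return target

# Example: two overlapped speakers; the earlier-starting utterance leads.
print(serialize_references([
    (1.2, ["how", "are", "you"]),
    (0.0, ["hello", "world"]),
]))
# -> ['hello', 'world', '<sc>', 'how', 'are', 'you', '<eos>']
```

Because the target is a single token sequence, the same single output layer handles any number of speakers, which is what removes PIT's fixed upper bound on speaker count.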