Paper Title
Should we hard-code the recurrence concept or learn it instead? Exploring the Transformer architecture for Audio-Visual Speech Recognition
Paper Authors
Paper Abstract
The audio-visual speech fusion strategy AV Align has shown significant performance improvements in audio-visual speech recognition (AVSR) on the challenging LRS2 dataset. Performance improvements range from 7% to 30%, depending on the noise level, when leveraging the visual modality of speech in addition to the auditory one. This work presents a variant of AV Align in which the recurrent Long Short-Term Memory (LSTM) computation block is replaced by the more recently proposed Transformer block. We compare the two methods, discussing their strengths and weaknesses in detail. We find that Transformers also learn cross-modal monotonic alignments, but suffer from the same visual convergence problems as the LSTM model, calling for a deeper investigation into the dominant modality problem in machine learning.
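
To make the architectural swap concrete, below is a minimal PyTorch sketch of an AV Align-style fusion model in which the per-modality computation block can be either an LSTM or a Transformer encoder layer, with audio features attending over video features through cross-modal attention. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: all module names (LSTMBlock, TransformerBlock, AVFusion), dimensions, and hyperparameters are hypothetical.

```python
# Hypothetical sketch of the LSTM -> Transformer block swap described in the
# abstract. Names, dimensions, and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class LSTMBlock(nn.Module):
    """Recurrent computation block: a unidirectional LSTM encoder."""

    def __init__(self, dim: int):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)
        return out


class TransformerBlock(nn.Module):
    """Drop-in replacement: a single Transformer encoder layer."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x)


class AVFusion(nn.Module):
    """AV Align-style fusion: audio queries attend over video keys/values,
    so the model can learn a cross-modal audio-to-video alignment."""

    def __init__(self, dim: int, block: str = "transformer"):
        super().__init__()
        make = TransformerBlock if block == "transformer" else LSTMBlock
        self.audio_enc = make(dim)
        self.video_enc = make(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, audio: torch.Tensor, video: torch.Tensor):
        a = self.audio_enc(audio)                # (B, Ta, D)
        v = self.video_enc(video)                # (B, Tv, D)
        fused, attn = self.cross_attn(a, v, v)   # audio attends to video
        return fused + a, attn                   # residual fusion + alignment weights


# Toy usage: 100 audio frames, 25 video frames, feature dim 64.
model = AVFusion(dim=64, block="transformer")
fused, alignment = model(torch.randn(2, 100, 64), torch.randn(2, 25, 64))
print(fused.shape, alignment.shape)  # (2, 100, 64) and (2, 100, 25)
```

The returned attention weights correspond to the cross-modal alignment the abstract refers to: in the paper's finding, both the LSTM and Transformer variants learn roughly monotonic audio-to-video attention patterns, while the convergence of the visual branch remains the shared difficulty.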