Paper Title


A Compact End-to-End Model with Local and Global Context for Spoken Language Identification

Authors

Fei Jia, Nithin Rao Koluguri, Jagadeesh Balam, Boris Ginsburg

Abstract


We introduce TitaNet-LID, a compact end-to-end neural network for Spoken Language Identification (LID) that is based on the ContextNet architecture. TitaNet-LID employs 1D depth-wise separable convolutions and Squeeze-and-Excitation layers to effectively capture local and global context within an utterance. Despite its small size, TitaNet-LID achieves performance similar to state-of-the-art models on the VoxLingua107 dataset while being 10 times smaller. Furthermore, it can be easily adapted to new acoustic conditions and unseen languages through simple fine-tuning, achieving a state-of-the-art accuracy of 88.2% on the FLEURS benchmark. Our model is scalable and can achieve a better trade-off between accuracy and speed. TitaNet-LID performs well even on short utterances less than 5s in length, indicating its robustness to input length.
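The abstract's core architectural idea, 1D depth-wise separable convolutions for local context combined with Squeeze-and-Excitation (SE) layers for global context, can be illustrated with a minimal PyTorch sketch. This is not the authors' TitaNet-LID implementation; the block below is a generic, simplified instance of the two mechanisms named in the abstract, with all layer sizes (`channels`, `kernel_size`, `reduction`) chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation over the time axis: average-pool the whole
    utterance, then use a small bottleneck MLP to re-weight channels,
    injecting global (utterance-level) context into the features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                   # x: (batch, channels, time)
        scale = self.fc(x.mean(dim=2))      # squeeze: pool over time
        return x * scale.unsqueeze(2)       # excite: per-channel gating

class DepthwiseSeparableSE(nn.Module):
    """A 1D depth-wise separable convolution (local context within the
    kernel's receptive field) followed by an SE layer (global context)."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # depth-wise: one filter per channel (groups=channels)
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        # point-wise: 1x1 conv mixes information across channels
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.se = SEBlock(channels)

    def forward(self, x):
        return self.se(self.pointwise(self.depthwise(x)))

# Shapes only; real inputs would be acoustic features, e.g. mel filterbanks.
x = torch.randn(2, 64, 100)                 # (batch, channels, time frames)
y = DepthwiseSeparableSE(64)(x)
print(tuple(y.shape))                       # (2, 64, 100)
```

The depth-wise/point-wise split is what keeps such models compact: a depth-wise conv uses `channels * kernel_size` weights instead of `channels² * kernel_size` for a dense conv, which is consistent with the abstract's emphasis on small model size.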
