Paper Title

ST-BERT: Cross-modal Language Model Pre-training For End-to-end Spoken Language Understanding

Paper Authors

Minjeong Kim, Gyuwan Kim, Sang-Woo Lee, Jung-Woo Ha

Paper Abstract

Language model pre-training has shown promising results in various downstream tasks. In this context, we introduce a cross-modal pre-trained language model, called Speech-Text BERT (ST-BERT), to tackle end-to-end spoken language understanding (E2E SLU) tasks. Taking phoneme posterior and subword-level text as an input, ST-BERT learns a contextualized cross-modal alignment via our two proposed pre-training tasks: Cross-modal Masked Language Modeling (CM-MLM) and Cross-modal Conditioned Language Modeling (CM-CLM). Experimental results on three benchmarks present that our approach is effective for various SLU datasets and shows a surprisingly marginal performance degradation even when 1% of the training data are available. Also, our method shows further SLU performance gain via domain-adaptive pre-training with domain-specific speech-text pair data.
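
The pre-training setup can be pictured with a short sketch. The block below is a minimal, hypothetical illustration of the CM-MLM idea from the abstract: per-frame phoneme posteriors and subword tokens are embedded into one shared sequence, and masked subwords are predicted with the speech frames as context. The class name CrossModalMLMSketch, all dimensions, the generic Transformer encoder, and the masking interface are assumptions for illustration only; they are not taken from the paper, and CM-CLM is not covered.

```python
# A minimal, hypothetical sketch of cross-modal masked language modeling (CM-MLM)
# as described in the abstract. Nothing here is the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalMLMSketch(nn.Module):
    def __init__(self, num_phonemes=70, vocab_size=30000, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        self.phoneme_proj = nn.Linear(num_phonemes, d_model)  # project per-frame phoneme posteriors
        self.text_embed = nn.Embedding(vocab_size, d_model)   # subword token embeddings
        self.modality_embed = nn.Embedding(2, d_model)        # 0 = speech, 1 = text
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.mlm_head = nn.Linear(d_model, vocab_size)        # predict masked subword ids

    def forward(self, phoneme_posteriors, text_ids, mask_positions, mask_token_id):
        # phoneme_posteriors: (B, T_speech, num_phonemes); text_ids: (B, T_text)
        # mask_positions: boolean (B, T_text) marking the subwords to mask and predict
        device = text_ids.device
        masked_ids = text_ids.clone()
        masked_ids[mask_positions] = mask_token_id            # hide the target subwords

        speech = self.phoneme_proj(phoneme_posteriors) + self.modality_embed(
            torch.zeros(phoneme_posteriors.shape[:2], dtype=torch.long, device=device))
        text = self.text_embed(masked_ids) + self.modality_embed(
            torch.ones_like(masked_ids))

        hidden = self.encoder(torch.cat([speech, text], dim=1))
        text_hidden = hidden[:, speech.size(1):]              # keep only the text positions
        logits = self.mlm_head(text_hidden)                   # (B, T_text, vocab_size)
        # CM-MLM-style loss: recover the masked subwords conditioned on the speech frames
        return F.cross_entropy(logits[mask_positions], text_ids[mask_positions])


# Toy usage with random tensors (shapes only, not real speech or text).
model = CrossModalMLMSketch()
posteriors = torch.rand(2, 50, 70)                            # fake phoneme posteriors
ids = torch.randint(5, 30000, (2, 12))                        # fake subword ids
mask = torch.zeros(2, 12, dtype=torch.bool)
mask[:, 3] = True                                             # mask one position per example
loss = model(posteriors, ids, mask, mask_token_id=4)
loss.backward()
```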
