Paper Title
Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling
Paper Authors
Paper Abstract
Pre-training models on vast quantities of unlabeled data has emerged as an effective approach to improving accuracy on many NLP tasks. On the other hand, traditional machine translation has a long history of leveraging unlabeled data through noisy channel modeling. The same idea has recently been shown to achieve strong improvements for neural machine translation. Unfortunately, naïve noisy channel modeling with modern sequence to sequence models is up to an order of magnitude slower than alternatives. We address this issue by introducing efficient approximations to make inference with the noisy channel approach as fast as strong ensembles while increasing accuracy. We also show that the noisy channel approach can outperform strong pre-training results by achieving a new state of the art on WMT Romanian-English translation.
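The noisy channel formulation applies Bayes' rule to score a candidate translation y of a source sentence x via the channel model p(x|y) and a target-side language model p(y), which is how unlabeled target data enters the system. The sketch below illustrates only the general weighted reranking form of this idea, not the paper's online decoding procedure or its fast approximations; the function names, interpolation weights, and toy scorers are hypothetical stand-ins for neural models.

```python
def noisy_channel_score(y, x, log_p_direct, log_p_channel, log_p_lm,
                        lam_channel=1.0, lam_lm=1.0):
    # Weighted combination of the direct model log p(y|x), the channel
    # model log p(x|y), and the language model log p(y).  The lam_*
    # interpolation weights are illustrative and would be tuned on
    # held-out data.
    return (log_p_direct(y, x)
            + lam_channel * log_p_channel(x, y)
            + lam_lm * log_p_lm(y))


def rerank(x, candidates, **models):
    # Return the candidate translation with the highest combined score.
    return max(candidates, key=lambda y: noisy_channel_score(y, x, **models))


# Toy usage with stand-in scorers; a real system would query neural
# sequence to sequence models and a language model trained on
# unlabeled target-side text.
if __name__ == "__main__":
    candidates = ["the cat sat", "cat the sat"]
    best = rerank(
        "die katze sass",
        candidates,
        log_p_direct=lambda y, x: -0.1 * len(y.split()),
        log_p_channel=lambda x, y: -0.5 if y.startswith("the") else -2.0,
        log_p_lm=lambda y: -1.0 if y.startswith("the") else -5.0,
    )
    print(best)  # -> "the cat sat"
```

Scoring every candidate with a full channel model in this way is what makes naïve noisy channel decoding slow; per the abstract, the paper's contribution is a set of approximations that bring this cost down to roughly that of a strong ensemble during online inference.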