Paper Title
A Supervised Word Alignment Method based on Cross-Language Span Prediction using Multilingual BERT
Paper Authors
Abstract
We present a novel supervised word alignment method based on cross-language span prediction. We first formalize the word alignment problem as a collection of independent predictions, each from a token in the source sentence to a span in the target sentence. As this is equivalent to a SQuAD v2.0-style question answering task, we then solve it using multilingual BERT, fine-tuned on manually created gold word alignment data. We greatly improve alignment accuracy by adding the token's surrounding context to the question. In experiments on five word alignment datasets covering Chinese, Japanese, German, Romanian, French, and English, we show that the proposed method significantly outperforms previous supervised and unsupervised word alignment methods without using any bitexts for pretraining. For example, we achieve an F1 score of 86.7 on the Chinese-English data, 13.3 points higher than the previous state-of-the-art supervised method.
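The formulation the abstract describes can be illustrated with a minimal sketch: one alignment decision becomes one SQuAD-style instance, where the question presents the source token in its full sentence context and the target sentence serves as the answer passage. The boundary markers, field names, and function name below are illustrative assumptions, not the paper's exact format.

```python
# Sketch of framing one word-alignment decision as a SQuAD v2.0-style
# question answering instance. The bracket markers used to highlight
# the queried token in context are an assumption for illustration.

def make_qa_instance(src_tokens, idx, tgt_sentence, lmark="[", rmark="]"):
    """Build a QA instance asking which target-side span the source
    token at position `idx` aligns to.

    A fine-tuned model would predict a character span in `context`,
    or an empty span when the token is unaligned (the SQuAD v2.0
    "no answer" case).
    """
    # Keep the whole source sentence as context, marking only the
    # queried token with boundary symbols.
    question = " ".join(
        f"{lmark} {tok} {rmark}" if i == idx else tok
        for i, tok in enumerate(src_tokens)
    )
    return {"question": question, "context": tgt_sentence}

example = make_qa_instance(["I", "love", "cats"], 1, "J'aime les chats")
print(example["question"])  # → I [ love ] cats
```

Because each source token is queried independently in both translation directions, the per-token span predictions can later be symmetrized into a many-to-many word alignment.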