Paper Title
Toxicity in Multilingual Machine Translation at Scale
Paper Authors
Paper Abstract
Machine Translation systems can produce different types of errors, some of which are characterized as critical or catastrophic due to the specific negative impact that they can have on users. In this paper, we focus on one type of critical error: added toxicity. We evaluate and analyze added toxicity when translating a large evaluation dataset (HOLISTICBIAS, over 472k sentences, covering 13 demographic axes) from English into 164 languages. An automatic toxicity evaluation shows that added toxicity across languages varies from 0% to 5%. The output languages with the most added toxicity tend to be low-resource ones, and the demographic axes with the most added toxicity include sexual orientation, gender and sex, and ability. We also perform human evaluation on a subset of 8 translation directions, confirming the prevalence of true added toxicity. To interpret what causes added toxicity, we use a measurement of the amount of source contribution to the translation, where a low source contribution implies hallucination. These input attributions allow us to explain toxicity: the source contributions significantly correlate with toxicity for 84% of the languages studied. Given our findings, our recommendations to reduce added toxicity are to curate training data to avoid mistranslations, mitigate hallucination, and check unstable translations.
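To make the two measurements in the abstract concrete, here is a minimal Python sketch: flagging added toxicity with per-language toxicity wordlists (assuming a simple wordlist-based detector, in the spirit of the paper's automatic evaluation) and correlating the flags with per-sentence source-contribution scores. The wordlist contents, example sentences, matching rule, and contribution scores below are illustrative placeholders, not the authors' implementation.

```python
import re

from scipy.stats import pearsonr  # pip install scipy

def is_toxic(sentence, wordlist):
    """Whole-word match against a toxicity wordlist. (A simplification:
    real matching also needs multiword entries and scripts without
    whitespace-delimited words.)"""
    tokens = re.findall(r"\w+", sentence.lower())
    return any(token in wordlist for token in tokens)

def added_toxicity(src, tgt, src_words, tgt_words):
    """Toxicity is 'added' when the translation matches the target-language
    list while the source matches nothing in the source-language list."""
    return int(not is_toxic(src, src_words) and is_toxic(tgt, tgt_words))

# Placeholder wordlists and data (real lists hold one term per line, per language).
eng_words = {"badword"}
spa_words = {"palabramala"}
pairs = [
    ("I love my grandma.", "Quiero a mi abuela."),  # clean
    ("Hi, my friends!", "Hola, amigos!"),           # clean
    ("Hi, my friends!", "Hola, palabramala!"),      # toxicity added
]
# Hypothetical mean source-contribution score per output sentence,
# e.g. from an attribution method; lower = more hallucinated.
source_contributions = [0.62, 0.58, 0.31]

flags = [added_toxicity(s, t, eng_words, spa_words) for s, t in pairs]
print(f"added toxicity rate: {100 * sum(flags) / len(flags):.1f}%")

# Pearson correlation between source contribution and the 0/1 toxicity
# flags (equivalent to a point-biserial correlation).
r, p = pearsonr(source_contributions, flags)
print(f"r = {r:.3f}, p = {p:.3g}")
```

Given the abstract's observation that low source contribution implies hallucination, one would expect a negative r in this setting: outputs that attend less to the source are more likely to contain added toxicity.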