Paper Title

Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset

Paper Authors

Ashish V. Thapliyal, Jordi Pont-Tuset, Xi Chen, Radu Soricut

Paper Abstract

Research in massively multilingual image captioning has been severely hampered by a lack of high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset (XM3600 in short), a geographically diverse set of 3600 images annotated with human-generated reference captions in 36 languages. The images were selected from across the world, covering regions where the 36 languages are spoken, and annotated with captions that achieve consistency in terms of style across all languages, while avoiding annotation artifacts due to direct translation. We apply this benchmark to model selection for massively multilingual image captioning models, and show superior correlation results with human evaluations when using XM3600 as golden references for automatic metrics.
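The evaluation use case the abstract describes, scoring a model's candidate captions against XM3600's human references with an automatic metric such as CIDEr, might look roughly like the sketch below. The file name `xm3600_refs.jsonl`, its record layout, and the helper functions are assumptions made for illustration, not the dataset's actual release format; the sketch assumes the `pycocoevalcap` package for the CIDEr implementation.

```python
# Minimal sketch: scoring candidate captions against XM3600 references with CIDEr.
# Assumes a hypothetical JSONL file with one record per (image, language) pair:
#   {"image_id": "...", "lang": "fr", "captions": ["...", "..."]}
import json
from collections import defaultdict

from pycocoevalcap.cider.cider import Cider


def load_references(path, lang):
    """Collect the human reference captions for one language, keyed by image id."""
    refs = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec["lang"] == lang:
                refs[rec["image_id"]].extend(rec["captions"])
    return refs


def cider_score(refs, hyps):
    """CIDEr over images that have both references and a model caption.

    pycocoevalcap expects dicts mapping image_id -> list of strings,
    with exactly one candidate per image on the hypothesis side.
    """
    gts = {k: v for k, v in refs.items() if k in hyps}
    res = {k: [hyps[k]] for k in gts}
    score, _ = Cider().compute_score(gts, res)
    return score


# Usage (hypothetical model outputs):
# refs = load_references("xm3600_refs.jsonl", lang="fr")
# hyps = {"img_0001": "un chien court sur la plage", ...}
# print(cider_score(refs, hyps))
```

Note that CIDEr builds n-grams by splitting on whitespace, so for languages without whitespace word boundaries (e.g., Japanese or Thai) both candidates and references would need tokenization before scores are meaningful.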
