Paper Title
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions
Paper Authors
Paper Abstract
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate if a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct "mask-and-predict" pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge the two modalities. We find that such a simple approach achieves performance close to a model pre-trained with aligned data, on four English V&L benchmarks. Our work challenges the widely held notion that aligned data are necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
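To make the "mask-and-predict" idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of the image-only side of such pre-training: detected object tags are fed alongside region features, one tag is masked, and the model learns to recover it. This is an illustrative toy, not the paper's implementation; the vocabulary, model sizes, and the assumption that region features and tags come from an external detector are all placeholders. Text-only examples would go through the same masked prediction over the shared tag/word vocabulary, which is what lets the tags serve as anchor points between the two unimodal corpora.

```python
# Toy "mask-and-predict" step on an image-only example.
# Hypothetical sketch: detector outputs, vocabulary, and dimensions are made up.
import torch
import torch.nn as nn

VOCAB = {"[MASK]": 0, "dog": 1, "frisbee": 2, "grass": 3}
MASK_ID = VOCAB["[MASK]"]

class TinyVLEncoder(nn.Module):
    def __init__(self, dim=64, vocab_size=len(VOCAB), region_dim=2048):
        super().__init__()
        self.tag_embed = nn.Embedding(vocab_size, dim)    # shared with the text-only stream
        self.region_proj = nn.Linear(region_dim, dim)     # visual region features -> model dim
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.tag_head = nn.Linear(dim, vocab_size)        # predicts masked tags/words

    def forward(self, region_feats, tag_ids):
        # Concatenate projected regions and embedded tags into one sequence.
        tokens = torch.cat([self.region_proj(region_feats), self.tag_embed(tag_ids)], dim=1)
        hidden = self.encoder(tokens)
        # Return logits only for the tag positions (after the region positions).
        return self.tag_head(hidden[:, region_feats.size(1):])

# Pretend an object detector returned 3 regions with tags "dog", "frisbee", "grass".
region_feats = torch.randn(1, 3, 2048)
tag_ids = torch.tensor([[VOCAB["dog"], VOCAB["frisbee"], VOCAB["grass"]]])

# Mask the second tag; the model must predict it from the regions and remaining tags.
masked = tag_ids.clone()
masked[0, 1] = MASK_ID

model = TinyVLEncoder()
logits = model(region_feats, masked)
loss = nn.functional.cross_entropy(logits[0, 1:2], tag_ids[0, 1:2])  # loss on the masked position
loss.backward()
print(float(loss))
```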