Paper Title

ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph

Paper Authors

Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Paper Abstract

We propose a knowledge-enhanced approach, ERNIE-ViL, which incorporates structured knowledge obtained from scene graphs to learn joint representations of vision-language. ERNIE-ViL tries to build the detailed semantic connections (objects, attributes of objects and relationships between objects) across vision and language, which are essential to vision-language cross-modal tasks. Utilizing scene graphs of visual scenes, ERNIE-ViL constructs Scene Graph Prediction tasks, i.e., Object Prediction, Attribute Prediction and Relationship Prediction tasks in the pre-training phase. Specifically, these prediction tasks are implemented by predicting nodes of different types in the scene graph parsed from the sentence. Thus, ERNIE-ViL can learn the joint representations characterizing the alignments of the detailed semantics across vision and language. After pre-training on large scale image-text aligned datasets, we validate the effectiveness of ERNIE-ViL on 5 cross-modal downstream tasks. ERNIE-ViL achieves state-of-the-art performances on all these tasks and ranks the first place on the VCR leaderboard with an absolute improvement of 3.7%.
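To make the Scene Graph Prediction idea concrete, below is a minimal sketch (not the authors' code) of how masked prediction targets could be derived from a caption's scene graph. The hand-written node indices, the `build_sgp_targets` helper, and the masking probability are all illustrative assumptions; ERNIE-ViL obtains the scene graph with an off-the-shelf parser and predicts the masked nodes jointly from the remaining text and the image.

```python
# Illustrative sketch only: derive Object / Attribute / Relationship
# Prediction targets by masking scene-graph nodes in a caption.
import random

def toy_scene_graph():
    """Hand-written scene graph for the example caption
    "a brown dog chasing a white cat"; values are token indices."""
    return {
        "objects":    [2, 6],   # "dog", "cat"
        "attributes": [1, 5],   # "brown", "white"
        "relations":  [3],      # "chasing"
    }

def build_sgp_targets(tokens, graph, mask_token="[MASK]", prob=0.3):
    """Mask tokens that correspond to scene-graph nodes; the model must
    recover each masked node (its type and word) from the rest of the
    sentence and the paired image."""
    masked = list(tokens)
    targets = {}  # position -> (node_type, original token)
    for node_type, positions in graph.items():
        for pos in positions:
            if random.random() < prob:  # masking rate is an assumption
                targets[pos] = (node_type, tokens[pos])
                masked[pos] = mask_token
    return masked, targets

tokens = ["a", "brown", "dog", "chasing", "a", "white", "cat"]
random.seed(0)
masked, targets = build_sgp_targets(tokens, toy_scene_graph())
print(masked)   # caption with some scene-graph nodes replaced by [MASK]
print(targets)  # prediction targets labelled by node type
```

Plain masked language modeling masks tokens uniformly, so most masks fall on common words; restricting masking to scene-graph nodes, as sketched above, forces the model to ground objects, attributes, and relationships in the image, which is the alignment the abstract emphasizes.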
