Paper Title

BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning

Paper Authors

Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan

Paper Abstract

Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. Code and checkpoints are available at https://github.com/microsoft/BridgeTower.
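
The abstract describes bridge layers that inject the representations of the top uni-modal encoder layers into successive layers of the cross-modal encoder. To make that idea concrete, below is a minimal PyTorch sketch of one plausible bridge design, an add-and-LayerNorm fusion with a learned projection; the paper compares several bridge variants, so treat the `BridgeLayer` class and all names and dimensions here as illustrative assumptions rather than the released implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class BridgeLayer(nn.Module):
    """Illustrative bridge layer (assumed design, not the official code):
    fuses one uni-modal layer's output into the cross-modal stream via a
    learned projection, a residual add, and LayerNorm."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # project uni-modal features into the cross-modal space
        self.norm = nn.LayerNorm(dim)

    def forward(self, cross: torch.Tensor, uni: torch.Tensor) -> torch.Tensor:
        # Residual fusion: each cross-modal layer sees uni-modal semantics
        # of the matching level, enabling bottom-up alignment.
        return self.norm(cross + self.proj(uni))

# Toy usage: connect the top-3 layers of a pre-trained uni-modal encoder
# to 3 cross-modal layers (depth and sizes are placeholders).
dim, seq_len = 768, 16
bridges = nn.ModuleList(BridgeLayer(dim) for _ in range(3))
uni_layer_outputs = [torch.randn(1, seq_len, dim) for _ in range(3)]  # stand-ins for encoder states
x = torch.randn(1, seq_len, dim)  # cross-modal stream input
for bridge, z in zip(bridges, uni_layer_outputs):
    x = bridge(x, z)
    # ...a cross-modal transformer layer would process x here...
```

This also suggests why the abstract can claim "almost negligible" parameter and computational overhead: under this assumed design, each bridge adds only one projection and one normalization per connected layer, not an additional encoder.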
