Paper Title
Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
Paper Authors
Paper Abstract
We present the modality gap, an intriguing geometric phenomenon of the representation space of multi-modal models. Specifically, we show that different data modalities (e.g., images and text) are embedded at arm's length in their shared representation space in multi-modal models such as CLIP. Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization. At model initialization, we show empirically and theoretically that the representations of a common deep neural network are restricted to a narrow cone. As a consequence, in a multi-modal model with two encoders, the representations of the two modalities are clearly apart when the model is initialized. During optimization, contrastive learning keeps the different modalities separated by a certain distance, which is influenced by the temperature parameter in the loss function. Our experiments further demonstrate that varying the modality gap distance has a significant impact on the model's downstream zero-shot classification performance and fairness. Our code and data are available at https://modalitygap.readthedocs.io/
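To make the abstract's central quantity concrete, below is a minimal, illustrative sketch (not the authors' released code) of how a modality gap can be measured as the distance between the two modalities' embedding centroids, and how embeddings can be shifted along the gap vector to vary that distance. It assumes you already have L2-normalized image and text embeddings from a two-encoder model such as CLIP; the names `img_emb`, `txt_emb`, and `shift_scale` are hypothetical, and random vectors stand in for real encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings; in practice these would come from the image and
# text encoders of a multi-modal model such as CLIP.
img_emb = rng.normal(size=(512, 128))
txt_emb = rng.normal(size=(512, 128))
img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
txt_emb /= np.linalg.norm(txt_emb, axis=1, keepdims=True)

# Gap vector: the difference between the two modality centroids.
gap = img_emb.mean(axis=0) - txt_emb.mean(axis=0)
print("modality gap distance:", np.linalg.norm(gap))

# Shifting each modality along the gap vector (then re-normalizing)
# changes the gap distance; the paper studies how such variation
# affects downstream zero-shot classification and fairness.
shift_scale = 0.5  # hypothetical knob, not a value from the paper
img_shifted = img_emb - shift_scale * gap
txt_shifted = txt_emb + shift_scale * gap
img_shifted /= np.linalg.norm(img_shifted, axis=1, keepdims=True)
txt_shifted /= np.linalg.norm(txt_shifted, axis=1, keepdims=True)

new_gap = img_shifted.mean(axis=0) - txt_shifted.mean(axis=0)
print("gap distance after shifting:", np.linalg.norm(new_gap))
```

With real encoder outputs, the same centroid-difference computation exposes the gap the abstract describes; the random stand-ins here only demonstrate the mechanics, not the phenomenon itself.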