Paper Title
Interventional Contrastive Learning with Meta Semantic Regularizer
Paper Authors
Paper Abstract
Contrastive learning (CL)-based self-supervised learning models learn visual representations in a pairwise manner. Although prevailing CL models have achieved great progress, in this paper we uncover a long-overlooked phenomenon: when a CL model is trained on full images, its performance when tested on full images is better than when tested on foreground areas; when the model is trained on foreground areas, its performance when tested on full images is worse than when tested on foreground areas. This observation reveals that image backgrounds may interfere with the model's learning of semantic information, and that their influence has not been fully eliminated. To tackle this issue, we build a Structural Causal Model (SCM) that models the background as a confounder. We propose a backdoor adjustment-based regularization method, namely Interventional Contrastive Learning with Meta Semantic Regularizer (ICL-MSR), to perform causal intervention on the proposed SCM. ICL-MSR can be incorporated into any existing CL method to alleviate background distraction during representation learning. Theoretically, we prove that ICL-MSR achieves a tighter error bound. Empirically, our experiments on multiple benchmark datasets demonstrate that ICL-MSR improves the performance of different state-of-the-art CL methods.
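For reference, backdoor adjustment is a standard tool from causal inference: it estimates the effect of an intervention by stratifying over the confounder and averaging out its prior. Under our reading of the abstract (the symbols X, Y, and B below denote the input image, the learned semantics, and the background confounder; this mapping is an assumption, not notation taken from the paper), the textbook form is:

P(Y \mid \mathrm{do}(X = x)) = \sum_{b} P(Y \mid X = x, B = b)\, P(B = b)

The abstract does not detail how ICL-MSR instantiates this adjustment within the contrastive objective; only the general formula is shown here.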