Paper Title

Learning to Learn Domain-invariant Parameters for Domain Generalization

Authors

Feng Hou, Yao Zhang, Yang Liu, Jin Yuan, Cheng Zhong, Yang Zhang, Zhongchao Shi, Jianping Fan, Zhiqiang He

Abstract

Due to domain shift, deep neural networks (DNNs) usually fail to generalize well on unknown test data in practice. Domain generalization (DG) aims to overcome this issue by capturing domain-invariant representations from source domains. Motivated by the insight that only partial parameters of DNNs are optimized to extract domain-invariant representations, we expect a general model that is capable of well perceiving and emphatically updating such domain-invariant parameters. In this paper, we propose two modules, Domain Decoupling and Combination (DDC) and Domain-invariance-guided Backpropagation (DIGB), which encourage such a general model to focus on the parameters that have a unified optimization direction between pairs of contrastive samples. Extensive experiments on two benchmarks demonstrate that our proposed method achieves state-of-the-art performance with strong generalization capability.
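The core idea behind DIGB, as described in the abstract, is to identify parameters whose gradients agree across a pair of contrastive samples and to update those parameters more strongly. The details are in the paper; the sketch below only illustrates one plausible reading of "unified optimization direction" (gradient-sign agreement), with the function name, `boost` factor, and averaging scheme being illustrative assumptions, not the authors' exact method.

```python
def digb_style_update(params, grad_a, grad_b, lr=0.1, boost=2.0):
    """Hypothetical sketch: emphasize parameters whose gradients from a
    contrastive sample pair point in the same direction (treated here as
    a proxy for being domain-invariant)."""
    updated = []
    for p, ga, gb in zip(params, grad_a, grad_b):
        agree = (ga > 0) == (gb > 0)      # unified optimization direction?
        avg = 0.5 * (ga + gb)             # shared descent direction
        scale = boost if agree else 1.0   # larger step for invariant params
        updated.append(p - lr * scale * avg)
    return updated

# Toy usage: the first and third parameters have agreeing gradients
# across the pair and therefore receive the boosted update.
params = [1.0, 1.0, 1.0]
updated = digb_style_update(params,
                            grad_a=[0.5, -0.2, 0.4],
                            grad_b=[0.3, 0.1, 0.2])
```

In this reading, parameters with conflicting gradient directions (likely domain-specific) still receive the averaged update, just without the emphasis.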
