Paper Title

Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts

Authors

Qi Fan, Mattia Segu, Yu-Wing Tai, Fisher Yu, Chi-Keung Tang, Bernt Schiele, Dengxin Dai

Abstract


Improving a model's generalizability against domain shifts is crucial, especially for safety-critical applications such as autonomous driving. Real-world domain styles can vary substantially due to environment changes and sensor noise, but deep models only know the training domain style. Such a domain style gap impedes model generalization on diverse real-world domains. Our proposed Normalization Perturbation (NP) can effectively overcome this domain style overfitting problem. We observe that this problem is mainly caused by the biased distribution of low-level features learned in shallow CNN layers. Thus, we propose to perturb the channel statistics of source domain features to synthesize various latent styles, so that the trained deep model can perceive diverse potential domains and generalize well even without observing target domain data during training. We further explore style-sensitive channels for effective style synthesis. Normalization Perturbation relies only on a single source domain and is surprisingly effective and extremely easy to implement. Extensive experiments verify the effectiveness of our method for generalizing models under real-world domain shifts.
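To make the core idea concrete, below is a minimal NumPy sketch of perturbing the per-channel statistics of a feature map to synthesize latent styles, as the abstract describes. This is an illustrative reconstruction, not the authors' exact formulation: the function name, the Gaussian noise centered at 1, and the default strength `sigma=0.75` are assumptions for the sketch.

```python
import numpy as np

def normalization_perturbation(feat, sigma=0.75, rng=None):
    """Perturb per-channel feature statistics to synthesize latent domain styles.

    feat: feature map of shape (N, C, H, W), e.g. from a shallow CNN layer.
    sigma: perturbation strength (hypothetical default, not from the paper's abstract).
    """
    rng = np.random.default_rng() if rng is None else rng
    n, c = feat.shape[:2]

    # Per-sample, per-channel statistics over the spatial dimensions.
    mu = feat.mean(axis=(2, 3), keepdims=True)
    std = feat.std(axis=(2, 3), keepdims=True) + 1e-6

    # Random scaling factors around 1 for the channel statistics;
    # each draw corresponds to one synthesized latent style.
    alpha = rng.normal(1.0, sigma, size=(n, c, 1, 1))
    beta = rng.normal(1.0, sigma, size=(n, c, 1, 1))

    # Normalize, then re-style with the perturbed statistics.
    normalized = (feat - mu) / std
    return normalized * (alpha * std) + beta * mu
```

With `sigma=0` the perturbation reduces to the identity, and during training a fresh style is sampled per forward pass, which is what lets the model see many potential domains from a single source domain.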
