Paper Title

A Closer Look at Smoothness in Domain Adversarial Training

Paper Authors

Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, R. Venkatesh Babu

Paper Abstract

Domain adversarial training has been ubiquitous for achieving invariant representations and is used widely for various domain adaptation tasks. In recent times, methods converging to smooth optima have shown improved generalization for supervised learning tasks like classification. In this work, we analyze the effect of smoothness-enhancing formulations on domain adversarial training, the objective of which is a combination of task loss (e.g., classification, regression, etc.) and adversarial terms. We find that converging to a smooth minimum with respect to (w.r.t.) the task loss stabilizes adversarial training, leading to better performance on the target domain. In contrast to the task loss, our analysis shows that converging to a smooth minimum w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain. Based on this analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our analysis also provides insight into the extensive usage of SGD over Adam in the community for domain adversarial training.
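The core finding above (smooth minima help for the task loss but hurt for the adversarial loss) suggests an implementation in which a sharpness-aware (SAM-style) weight perturbation is computed from the task loss alone, while the adversarial term is optimized in the usual way. Below is a minimal PyTorch sketch of one such step; it is an illustration inferred from the abstract, not the authors' released code, and names such as feature_extractor, domain_discriminator, sdat_like_step, and the perturbation radius rho are hypothetical.

# Minimal sketch (hypothetical names throughout): SAM-style smoothing applied
# only to the task loss inside a DANN-style domain adversarial step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, as used in DANN-style adversarial training."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
classifier = nn.Linear(32, 10)
domain_discriminator = nn.Linear(32, 1)

params = (list(feature_extractor.parameters()) + list(classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-2, momentum=0.9)
rho = 0.05  # radius of the weight perturbation (illustrative value)

def task_loss(x_s, y_s):
    # Supervised (source-domain) classification loss.
    return F.cross_entropy(classifier(feature_extractor(x_s)), y_s)

def adversarial_loss(x_s, x_t):
    # Domain discriminator sees reversed gradients w.r.t. the feature extractor.
    feats = torch.cat([feature_extractor(x_s), feature_extractor(x_t)], dim=0)
    logits = domain_discriminator(GradReverse.apply(feats)).squeeze(1)
    labels = torch.cat([torch.ones(len(x_s)), torch.zeros(len(x_t))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def sdat_like_step(x_s, y_s, x_t):
    # 1) Ascent step driven by the *task* loss only: move weights to a nearby
    #    point of higher task loss (seeking flat/smooth task-loss minima).
    task_loss(x_s, y_s).backward()
    perturbations = []
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in params if p.grad is not None]))
        for p in params:
            e = (rho * p.grad / (grad_norm + 1e-12)
                 if p.grad is not None else torch.zeros_like(p))
            p.add_(e)
            perturbations.append(e)
    optimizer.zero_grad()

    # 2) Descent step: task loss at the perturbed weights plus the adversarial
    #    term (which gets no smoothing of its own), then undo the perturbation
    #    and update with the combined gradient.
    loss = task_loss(x_s, y_s) + adversarial_loss(x_s, x_t)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(params, perturbations):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Toy usage with random tensors standing in for a source and a target batch.
x_src, y_src = torch.randn(8, 64), torch.randint(0, 10, (8,))
x_tgt = torch.randn(8, 64)
print(sdat_like_step(x_src, y_src, x_tgt))

Computing the adversarial term only in the second (descent) pass is one reasonable reading of "smoothing only w.r.t. the task loss"; the authors' released SDAT code should be consulted for the exact formulation.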
