Paper Title

Joint Contrastive Learning for Unsupervised Domain Adaptation

Paper Authors

Changhwa Park, Jonghyun Lee, Jaeyoon Yoo, Minhoe Hur, Sungroh Yoon

Paper Abstract

Enhancing feature transferability by matching marginal distributions has led to improvements in domain adaptation, although this is at the expense of feature discrimination. In particular, the ideal joint hypothesis error in the target error upper bound, which was previously considered to be minute, has been found to be significant, impairing its theoretical guarantee. In this paper, we propose an alternative upper bound on the target error that explicitly considers the joint error to render it more manageable. With the theoretical analysis, we suggest a joint optimization framework that combines the source and target domains. Further, we introduce Joint Contrastive Learning (JCL) to find class-level discriminative features, which is essential for minimizing the joint error. With a solid theoretical framework, JCL employs contrastive loss to maximize the mutual information between a feature and its label, which is equivalent to maximizing the Jensen-Shannon divergence between conditional distributions. Experiments on two real-world datasets demonstrate that JCL outperforms the state-of-the-art methods.
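For context, the target-error upper bound the abstract refers to is the classical result of Ben-David et al., which (under our reading, not quoted from this paper) takes the form

\[
\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda^*,
\qquad
\lambda^* = \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big],
\]

where \(\lambda^*\) is the ideal joint hypothesis error. Matching marginal distributions shrinks the divergence term but can silently inflate \(\lambda^*\), which is the failure mode the abstract highlights; the paper's alternative bound instead makes the joint error an explicit optimization target.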

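The mechanism the abstract describes is a contrastive loss that maximizes the mutual information between a feature and its label. As a rough illustration only (not the authors' code: the function name, temperature, and the KL-based InfoNCE-style estimator are our assumptions, and the paper itself relates its loss to a Jensen-Shannon divergence rather than this bound), a class-conditional contrastive loss can be sketched in PyTorch as follows:

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(features: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style class-conditional contrastive loss (illustrative sketch).

    Same-label pairs act as positives and different-label pairs as negatives,
    so minimizing the loss pushes up a lower bound on the mutual information
    between a feature and its (pseudo-)label.
    features: (N, D) embeddings; labels: (N,) class ids or pseudo-labels.
    """
    z = F.normalize(features, dim=1)                   # compare in cosine space
    sim = z @ z.t() / temperature                      # (N, N) similarity logits
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # exclude self-pairs

    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Zero out non-positive entries before summing to avoid -inf * 0 = NaN.
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                             # anchors with >= 1 positive
    return -(pos_log_prob.sum(dim=1)[valid] / pos_counts[valid]).mean()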
```

In a domain-adaptation setting one would apply this to a mixed source/target batch, e.g. `loss = class_contrastive_loss(encoder(x), labels_or_pseudo_labels)`, with target-domain labels necessarily supplied by pseudo-labeling since the target domain is unlabeled.