Paper Title
CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain Adaptation
Paper Authors
Abstract
Unsupervised domain adaptation, which requires no annotation for unlabeled target data, has attracted considerable interest in semantic segmentation. However, 1) existing methods neglect that not all semantic representations across domains are transferable, which cripples domain-wise transfer with untransferable knowledge; 2) they fail to narrow the category-wise distribution shift due to category-agnostic feature alignment. To address these challenges, we develop a new Critical Semantic-Consistent Learning (CSCL) model, which mitigates the discrepancy of both domain-wise and category-wise distributions. Specifically, a critical-transfer-based adversarial framework is designed to highlight transferable domain-wise knowledge while suppressing untransferable knowledge. A transferability critic guides a transferability quantizer to maximize positive transfer gain in a reinforcement learning manner, even when negative transfer of untransferable knowledge occurs. Meanwhile, with the help of a confidence-guided pseudo-label generator for target samples, a symmetric soft divergence loss is presented to explore inter-class relationships and facilitate category-wise distribution alignment. Experiments on several datasets demonstrate the superiority of our model.
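The abstract does not specify the exact forms of the confidence-guided pseudo-label generator or the symmetric soft divergence loss. A minimal sketch of one plausible reading, assuming the generator keeps only predictions above a confidence threshold and the divergence is a symmetric KL between soft class distributions (the function names and the 0.9 threshold are hypothetical, not from the paper):

```python
import numpy as np

def confidence_guided_pseudo_labels(probs, threshold=0.9):
    """Assign pseudo labels to target samples whose maximum class
    probability exceeds the confidence threshold; mark the rest as
    ignored (-1) so they do not contribute to the loss.

    probs: (N, C) array of per-sample softmax class probabilities.
    """
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, labels, -1)

def symmetric_soft_divergence(p, q, eps=1e-8):
    """Symmetric KL divergence between two sets of soft class
    distributions, averaged over samples: 0.5 * (KL(p||q) + KL(q||p)).

    p, q: (N, C) arrays of class probabilities for paired predictions.
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    kl_pq = np.sum(p * np.log(p / q), axis=1)
    kl_qp = np.sum(q * np.log(q / p), axis=1)
    return float(np.mean(0.5 * (kl_pq + kl_qp)))
```

Symmetrizing the KL term means neither prediction is privileged as the "true" distribution, which suits aligning two unlabeled soft outputs; the ignore index keeps low-confidence target pixels from injecting noisy gradients.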