Paper Title

Improved Mutual Mean-Teaching for Unsupervised Domain Adaptive Re-ID

Paper Authors

Yixiao Ge, Shijie Yu, Dapeng Chen

Paper Abstract

In this technical report, we present our submission to the VisDA Challenge in ECCV 2020, where we achieved one of the top-performing results on the leaderboard. Our solution is based on the Structured Domain Adaptation (SDA) and Mutual Mean-Teaching (MMT) frameworks. SDA, a domain-translation-based framework, focuses on carefully translating source-domain images to the target domain. MMT, a pseudo-label-based framework, focuses on refining pseudo labels with robust soft labels. Specifically, our training pipeline has three main steps: (i) we adopt SDA to generate source-to-target translated images; (ii) such images serve as informative training samples to pre-train the network; (iii) the pre-trained network is further fine-tuned by MMT on the target domain. Note that we design an improved MMT (dubbed MMT+) to further mitigate label noise by modeling inter-sample relations across the two domains and maintaining instance discrimination. Our proposed method achieved 74.78% mAP, ranking 2nd out of 153 teams.
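The MMT step mentioned above pairs two student networks, each supervised by the temporally averaged ("mean-teacher") weights of its peer through soft labels. A minimal sketch of the two underlying operations in plain Python; the parameter-dict representation and function names are illustrative assumptions, not the authors' implementation:

```python
import math

def ema_update(teacher, student, alpha=0.999):
    """Mean-teacher weight update: teacher <- alpha * teacher + (1 - alpha) * student.

    Parameters are modeled here as flat dicts of floats for illustration;
    a real implementation would iterate over network parameter tensors.
    """
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def soft_ce(student_logits, teacher_soft_labels):
    """Soft cross-entropy: the student is supervised by the peer
    mean-teacher's soft (probability) labels instead of noisy hard labels."""
    return -sum(p * lp for p, lp in zip(teacher_soft_labels,
                                        log_softmax(student_logits)))
```

For example, with `alpha=0.9`, one EMA step moves a teacher weight of `1.0` toward a student weight of `0.0`, giving `0.9`; and a uniform two-class soft label paired with uniform student logits yields a soft cross-entropy of `log 2`.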
