Paper Title

CycDA: Unsupervised Cycle Domain Adaptation from Image to Video

Authors

Wei Lin, Anna Kukleva, Kunyang Sun, Horst Possegger, Hilde Kuehne, Horst Bischof

Abstract

Although action recognition has achieved impressive results over recent years, both the collection and annotation of video training data remain time-consuming and cost-intensive. Image-to-video adaptation has therefore been proposed to exploit label-free web images as a source domain for adapting to unlabeled target videos. This poses two major challenges: (1) the spatial domain shift between web images and video frames, and (2) the modality gap between image and video data. To address these challenges, we propose Cycle Domain Adaptation (CycDA), a cycle-based approach for unsupervised image-to-video domain adaptation that, on the one hand, leverages the joint spatial information in images and videos and, on the other hand, trains an independent spatio-temporal model to bridge the modality gap. We alternate between spatial and spatio-temporal learning, with knowledge transfer between the two in each cycle. We evaluate our approach on benchmark datasets for image-to-video as well as mixed-source domain adaptation, achieving state-of-the-art results and demonstrating the benefits of our cyclic adaptation. Code is available at https://github.com/wlin-at/CycDA.
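To illustrate the alternating scheme the abstract describes, below is a minimal Python sketch of one possible training loop. All names (`train_spatial`, `pseudo_label`, `train_spatio_temporal`, `num_cycles`) and the exact control flow are illustrative assumptions, not the authors' actual implementation; see the linked repository for the real code.

```python
# Hypothetical sketch of CycDA's alternating cycle, as outlined in the
# abstract: spatial learning on images/frames, knowledge transfer via
# pseudo-labels, then spatio-temporal learning on target videos.

def train_spatial(web_images, target_videos, pseudo_labels=None):
    """Train/update the spatial (image-level) model, aligning web images
    with video frames; optionally guided by video pseudo-labels fed back
    from the previous cycle. Placeholder stub."""
    return "spatial_model"

def pseudo_label(model, target_videos):
    """Use the given model to assign pseudo-labels to the unlabeled
    target videos. Placeholder stub."""
    return ["pseudo_label"] * len(target_videos)

def train_spatio_temporal(target_videos, pseudo_labels):
    """Train an independent spatio-temporal (video-level) model on
    pseudo-labeled target videos to bridge the modality gap.
    Placeholder stub."""
    return "spatio_temporal_model"

def cycda(web_images, target_videos, num_cycles=3):
    st_model = None
    video_pseudo_labels = None
    for _ in range(num_cycles):
        # Spatial learning: exploit joint spatial information in images
        # and video frames (plus feedback from the previous cycle).
        spatial_model = train_spatial(web_images, target_videos,
                                      pseudo_labels=video_pseudo_labels)
        # Knowledge transfer: spatial predictions become video labels.
        frame_pseudo_labels = pseudo_label(spatial_model, target_videos)
        # Spatio-temporal learning on the pseudo-labeled videos.
        st_model = train_spatio_temporal(target_videos, frame_pseudo_labels)
        # Transfer back: refined video labels guide the next spatial stage.
        video_pseudo_labels = pseudo_label(st_model, target_videos)
    return st_model
```

The point of the cycle is that each stage supervises the other: the spatial model bootstraps labels for the video model, and the video model's (typically stronger) predictions refine the spatial stage in the next iteration.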
