Paper Title
Video-Text Representation Learning via Differentiable Weak Temporal Alignment
Paper Authors
Paper Abstract
Learning generic joint representations for video and text with a supervised method requires a prohibitively large amount of manually annotated video data. As a practical alternative, a large-scale but uncurated narrated video dataset, HowTo100M, has recently been introduced. However, learning joint embeddings of video and text in a self-supervised manner remains challenging due to the dataset's ambiguity and non-sequential alignment. In this paper, we propose a novel multi-modal self-supervised framework, Video-Text Temporally Weak Alignment-based Contrastive Learning (VT-TWINS), to capture significant information from noisy and weakly correlated data using a variant of Dynamic Time Warping (DTW). We observe that standard DTW inherently cannot handle weakly correlated data and considers only the globally optimal alignment path. To address these problems, we develop a differentiable DTW that also reflects local information under weak temporal alignment. Moreover, our proposed model applies a contrastive learning scheme to learn feature representations on weakly correlated data. Our extensive experiments demonstrate that VT-TWINS attains significant improvements in multi-modal representation learning and outperforms prior methods on various challenging downstream tasks. Code is available at https://github.com/mlvlab/VT-TWINS.
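For context, the core technical ingredient referenced in the abstract is a differentiable relaxation of Dynamic Time Warping. The sketch below shows a generic soft-DTW recursion, in which the hard minimum of standard DTW is replaced by a smoothed minimum so the alignment cost becomes differentiable; it is not the paper's exact weak-alignment variant, and the names soft_dtw, softmin, and the gamma smoothing parameter are illustrative assumptions rather than the released implementation (see the linked repository for the authors' code).

```python
import numpy as np

def softmin(values, gamma):
    # Smoothed minimum: -gamma * logsumexp(-values / gamma).
    # As gamma -> 0 this approaches the hard minimum used in standard DTW.
    v = np.asarray(values) / -gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(cost, gamma=0.1):
    # Soft-DTW alignment cost for an (n x m) pairwise cost matrix.
    # Differentiable w.r.t. `cost` because every min is smoothed.
    n, m = cost.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = cost[i - 1, j - 1] + softmin(
                [R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]], gamma
            )
    return R[n, m]

# Illustrative usage: cost as negative similarity between clip and sentence features.
video = np.random.randn(5, 8)   # 5 video clips, 8-dim features (toy data)
text = np.random.randn(3, 8)    # 3 narration sentences, 8-dim features (toy data)
cost = -video @ text.T          # lower cost = higher cross-modal similarity
print(soft_dtw(cost, gamma=0.1))
```

In a framework like the one described, such a smoothed alignment cost could serve as the distance inside a contrastive objective over weakly correlated video-text pairs; the smoothing parameter gamma controls how far the relaxation departs from the single globally optimal path of standard DTW.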