Paper Title

Cross-modal Learning for Multi-modal Video Categorization

Paper Authors

Palash Goyal, Saurabh Sahu, Shalini Ghosh, Chul Lee

Paper Abstract

Multi-modal machine learning (ML) models can process data in multiple modalities (e.g., video, audio, text) and are useful for video content analysis in a variety of problems (e.g., object detection, scene understanding, activity recognition). In this paper, we focus on the problem of video categorization using a multi-modal ML technique. In particular, we have developed a novel multi-modal ML approach that we call "cross-modal learning", where one modality influences another but only when there is correlation between the modalities -- for that, we first train a correlation tower that guides the main multi-modal video categorization tower in the model. We show how this cross-modal principle can be applied to different types of models (e.g., RNN, Transformer, NetVLAD), and demonstrate through experiments how our proposed multi-modal video categorization models with cross-modal learning out-perform strong state-of-the-art baseline models.
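The abstract describes a two-tower design: a correlation tower is trained first and then guides how much one modality is allowed to influence another inside the main categorization tower. The sketch below is a minimal, hypothetical illustration of that gating idea in PyTorch; the class names (CorrelationTower, GatedCategorizationTower), the use of pooled per-modality feature vectors, and the sigmoid-gated fusion are assumptions for illustration, not the authors' actual architecture, which the paper instantiates with RNN, Transformer, and NetVLAD variants.

```python
# Hypothetical sketch of the cross-modal gating idea from the abstract:
# a correlation tower scores how related two modality embeddings are, and
# that score gates how strongly text influences video in the main tower.
# All names and dimensions are illustrative only.
import torch
import torch.nn as nn


class CorrelationTower(nn.Module):
    """Predicts a scalar correlation score for a (video, text) feature pair."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid()
        )

    def forward(self, video_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        return self.scorer(torch.cat([video_feat, text_feat], dim=-1))  # (batch, 1)


class GatedCategorizationTower(nn.Module):
    """Main tower: text influences video only in proportion to the correlation score."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.cross = nn.Linear(dim, dim)            # project text into the video space
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, video_feat, text_feat, corr_score):
        # Cross-modal influence is scaled (gated) by the correlation score.
        influenced_video = video_feat + corr_score * torch.tanh(self.cross(text_feat))
        fused = torch.cat([influenced_video, text_feat], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    dim, num_classes, batch = 128, 10, 4
    video = torch.randn(batch, dim)
    text = torch.randn(batch, dim)
    corr_tower = CorrelationTower(dim)              # would be trained first in this scheme
    main_tower = GatedCategorizationTower(dim, num_classes)
    with torch.no_grad():                           # frozen correlation tower guides the main tower
        score = corr_tower(video, text)
    logits = main_tower(video, text, score)
    print(logits.shape)  # torch.Size([4, 10])
```

In this toy setup, when the correlation score approaches zero the text branch contributes almost nothing to the fused video representation, which mirrors the stated principle that one modality should influence another only when the modalities are correlated.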
