Paper Title
Transformer-based Multimodal Information Fusion for Facial Expression Analysis
Paper Authors
Paper Abstract
Human affective behavior analysis has received much attention in human-computer interaction (HCI). In this paper, we present our submission to the CVPR 2022 Competition on Affective Behavior Analysis in-the-wild (ABAW). To fully exploit affective knowledge from multiple views, we utilize multimodal features of spoken words, speech prosody, and facial expression, extracted from the video clips in the Aff-Wild2 dataset. Based on these features, we propose a unified transformer-based multimodal framework for Action Unit (AU) detection and expression recognition. Specifically, a static vision feature is first encoded from the current frame image. At the same time, we clip the adjacent frames with a sliding window and extract three kinds of multimodal features from the resulting image, audio, and text sequences. We then introduce a transformer-based fusion module that integrates the static vision feature and the dynamic multimodal features. A cross-attention module within the fusion module makes the integrated output features focus on the parts that are crucial for the downstream detection tasks. We also leverage data balancing, data augmentation, and post-processing methods to further improve model performance. In the official test of the ABAW3 Competition, our model ranks first on both the EXPR and AU tracks. Extensive quantitative evaluations, as well as ablation studies on the Aff-Wild2 dataset, demonstrate the effectiveness of our proposed method.
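To make the described fusion step concrete, below is a minimal sketch (not the authors' released code) of a cross-attention fusion module in PyTorch. It assumes the static vision feature of the current frame serves as the query while the windowed image/audio/text features, already projected to a shared dimension and concatenated along the time axis, serve as keys and values; the class name, layer sizes, and number of layers are illustrative assumptions.

```python
# Hypothetical sketch of cross-attention fusion between a static vision
# feature (query) and dynamic multimodal sequence features (keys/values).
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=512, num_heads=8, num_layers=2):
        super().__init__()
        self.attn_layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_layers)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_layers)])

    def forward(self, static_vis, dynamic_seq):
        # static_vis:  (B, 1, D) feature of the current frame
        # dynamic_seq: (B, T, D) windowed image/audio/text features,
        #              projected to a common dimension and concatenated in time
        query = static_vis
        for attn, norm in zip(self.attn_layers, self.norms):
            fused, _ = attn(query, dynamic_seq, dynamic_seq)  # cross-attention
            query = norm(query + fused)                       # residual + layer norm
        return query.squeeze(1)                               # (B, D) fused feature


if __name__ == "__main__":
    fusion = CrossAttentionFusion()
    vis = torch.randn(4, 1, 512)    # batch of 4 current-frame features
    seq = torch.randn(4, 30, 512)   # 30 windowed multimodal tokens each
    print(fusion(vis, seq).shape)   # torch.Size([4, 512])
```

In this sketch the fused (B, D) feature would feed task-specific heads such as a multi-label AU classifier or an expression classifier; the actual head design and training details are those reported in the paper, not shown here.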