Paper Title
Video Graph Transformer for Video Question Answering
Paper Authors
Paper Abstract
This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT's uniqueness is two-fold: 1) it designs a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations, and dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers for relevance comparison between the video and text to perform QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With a more reasonable video encoding and QA solution, we show that VGT achieves much better performance than prior arts, in the pretraining-free scenario, on VideoQA tasks that challenge dynamic relation reasoning. Its performance even surpasses that of models pretrained with millions of external data samples. We further show that VGT can also benefit a lot from self-supervised cross-modal pretraining, yet with orders of magnitude less data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope that VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/VGT.
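The QA-by-relevance design described in the abstract, where the video and each question-answer candidate are encoded by separate (disentangled) encoders and then compared, can be sketched as follows. This is a minimal illustration only: the mean-pooled encoders, cosine-similarity scoring, function names, and tensor shapes are all assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def encode_video(object_feats: np.ndarray) -> np.ndarray:
    """Stand-in for the dynamic graph transformer module: pool
    object-level features into one video embedding (mean pooling here,
    purely illustrative)."""
    return object_feats.mean(axis=0)

def encode_text(token_feats: np.ndarray) -> np.ndarray:
    """Stand-in for the text Transformer: pool token features."""
    return token_feats.mean(axis=0)

def answer_by_relevance(object_feats, candidate_token_feats):
    """Score each QA candidate by cosine similarity with the video
    embedding and return (best_index, all_scores), instead of running
    an entangled cross-modal Transformer for answer classification."""
    v = encode_video(object_feats)
    v = v / np.linalg.norm(v)
    scores = []
    for toks in candidate_token_feats:
        t = encode_text(toks)
        t = t / np.linalg.norm(t)
        scores.append(float(v @ t))
    return int(np.argmax(scores)), scores

# Toy usage: 8 object features of dim 16, 4 answer candidates
# of 5 tokens each (random features, for shape illustration only).
rng = np.random.default_rng(0)
video = rng.normal(size=(8, 16))
candidates = [rng.normal(size=(5, 16)) for _ in range(4)]
best, scores = answer_by_relevance(video, candidates)
```

In this framing, cross-modal interaction modules (not shown) would refine the two embeddings before the final comparison.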