Paper Title
End-to-end Dense Video Captioning as Sequence Generation
Paper Authors
Paper Abstract
Dense video captioning aims to identify the events of interest in an input video, and generate descriptive captions for each event. Previous approaches usually follow a two-stage generative process, which first proposes a segment for each event, then renders a caption for each identified segment. Recent advances in large-scale sequence generation pretraining have seen great success in unifying task formulation for a great variety of tasks, but so far, more complex tasks such as dense video captioning are not able to fully utilize this powerful paradigm. In this work, we show how to model the two subtasks of dense video captioning jointly as one sequence generation task, and simultaneously predict the events and the corresponding descriptions. Experiments on YouCook2 and ViTT show encouraging results and indicate the feasibility of training complex tasks such as end-to-end dense video captioning integrated into large-scale pretrained models.
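The key idea of modeling both subtasks as one sequence generation task can be sketched as follows: each event's segment boundaries are encoded as discrete time tokens placed alongside its caption, so a single decoder emits segments and descriptions in one pass. The `<t_k>` token format and the quantization scheme below are illustrative assumptions, not the paper's exact output vocabulary.

```python
# Hedged sketch: flattening dense video captioning targets into one
# sequence. The "<t_k>" time-token format is an assumption for
# illustration, not necessarily the paper's exact serialization.

def quantize(t, duration, bins=100):
    """Map a timestamp (seconds) to one of `bins` discrete time tokens."""
    k = min(bins - 1, int(t / duration * bins))
    return f"<t_{k}>"

def serialize(events, duration, bins=100):
    """Flatten (start, end, caption) events into a single target sequence."""
    parts = []
    for start, end, caption in sorted(events):
        parts.append(
            f"{quantize(start, duration, bins)} "
            f"{quantize(end, duration, bins)} {caption}"
        )
    return " ".join(parts)

events = [(0.0, 12.5, "crack two eggs into a bowl"),
          (12.5, 30.0, "whisk until smooth")]
print(serialize(events, duration=50.0))
# -> <t_0> <t_25> crack two eggs into a bowl <t_25> <t_60> whisk until smooth
```

With such a serialization, a standard pretrained encoder-decoder can be fine-tuned end to end on the flattened targets, and event segments are recovered at inference time by parsing the time tokens back out of the generated sequence.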