Paper Title
HTNet: Anchor-free Temporal Action Localization with Hierarchical Transformers
Paper Authors
Paper Abstract
Temporal action localization (TAL) is the task of identifying a set of actions in a video, which involves localizing the start and end frames and classifying each action instance. Existing methods address this task using predefined anchor windows or heuristic bottom-up boundary-matching strategies, which are major bottlenecks at inference time. Moreover, a key challenge is the inability to capture long-range actions due to a lack of global contextual information. In this paper, we present a novel anchor-free framework, referred to as HTNet, which predicts a set of <start time, end time, class> triplets from a video based on a Transformer architecture. After predicting coarse boundaries, we refine them through a background feature sampling (BFS) module and hierarchical Transformers, which enable our model to aggregate global contextual information and effectively exploit the inherent semantic relationships in a video. We demonstrate how our method localizes accurate action instances and achieves state-of-the-art performance on two TAL benchmark datasets: THUMOS14 and ActivityNet 1.3.
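To make the triplet formulation concrete, below is a minimal, illustrative PyTorch sketch of an anchor-free prediction head that emits <start time, end time, class> triplets from snippet-level video features. This is not the authors' implementation: the module name, feature dimension, number of queries, and the encoder/decoder wiring are all assumptions for exposition, and the BFS refinement and hierarchical Transformer stages described in the abstract are omitted.

```python
# Illustrative sketch only: a minimal anchor-free triplet-prediction head in the
# spirit of HTNet's <start time, end time, class> formulation. All names and
# hyperparameters here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class TripletHead(nn.Module):
    def __init__(self, feat_dim: int = 256, num_classes: int = 20, num_queries: int = 40):
        super().__init__()
        # Transformer encoder aggregates global context over the snippet-feature sequence.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Learnable queries, one per candidate action instance (hypothetical design choice).
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Heads emitting normalized (start, end) boundaries and per-instance class scores.
        self.boundary_head = nn.Linear(feat_dim, 2)
        self.class_head = nn.Linear(feat_dim, num_classes + 1)  # +1 for background

    def forward(self, clip_feats: torch.Tensor):
        # clip_feats: (batch, num_snippets, feat_dim) snippet-level video features
        memory = self.encoder(clip_feats)
        queries = self.queries.unsqueeze(0).expand(clip_feats.size(0), -1, -1)
        decoded = self.decoder(queries, memory)
        boundaries = self.boundary_head(decoded).sigmoid()  # coarse (start, end) in [0, 1]
        class_logits = self.class_head(decoded)             # class distribution per instance
        return boundaries, class_logits

# Usage: coarse triplets from 100 snippet features of dimension 256.
feats = torch.randn(1, 100, 256)
boundaries, class_logits = TripletHead()(feats)
print(boundaries.shape, class_logits.shape)  # (1, 40, 2), (1, 40, 21)
```

In the full method, such coarse boundaries would then be refined (e.g., by sampling background features and applying hierarchical Transformers, per the abstract); the sketch stops at the coarse prediction stage.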