Paper Title

FSD-10: A Dataset for Competitive Sports Content Analysis

Paper Authors

Liu, Shenlan; Liu, Xiang; Huang, Gao; Feng, Lin; Hu, Lianyu; Jiang, Dong; Zhang, Aibin; Liu, Yang; Qiao, Hong

Paper Abstract

Action recognition is an important and challenging problem in video analysis. Although the past decade has witnessed progress in action recognition with the development of deep learning, such progress has been slow in competitive sports content analysis. To promote research on action recognition from competitive sports video clips, we introduce a Figure Skating Dataset (FSD-10) for fine-grained sports content analysis. To this end, we collect 1484 clips from the worldwide figure skating championships in 2017-2018, which consist of 10 different actions in men's/ladies' programs. Each clip is recorded at 30 frames per second with a resolution of 1080 $\times$ 720. These clips are then annotated by experts with the action type, grade of execution, skater info, etc. To build a baseline for action recognition in figure skating, we evaluate state-of-the-art action recognition methods on FSD-10. Motivated by the idea that domain knowledge is of great concern in the sports field, we propose a keyframe-based temporal segment network (KTSN) for classification and achieve remarkable performance. Experimental results demonstrate that FSD-10 is an ideal dataset for benchmarking action recognition algorithms, as it requires accurately extracting action motions rather than action poses. We hope FSD-10, which is designed to have a large collection of fine-grained actions, can serve as a new challenge for developing more robust and advanced action recognition models.
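
The abstract states the per-clip annotation fields (action type, grade of execution, skater info) and the video format (30 fps, 1080 $\times$ 720, 10 action classes). As a rough illustration only, the sketch below shows one way such annotations could be represented and inspected in Python; the field names, the `annotations.csv` file layout, and the loader are assumptions for illustration, not the dataset's actual release format.

```python
import csv
from collections import Counter
from dataclasses import dataclass


# Hypothetical annotation record mirroring the fields mentioned in the abstract:
# action type, grade of execution (GOE), and skater info. The real FSD-10
# release format may differ.
@dataclass
class ClipAnnotation:
    clip_path: str    # path to the video clip (30 fps, 1080x720 per the abstract)
    action_type: str  # one of the 10 figure-skating action classes
    goe: float        # grade of execution assigned by expert judges
    skater: str       # skater information (e.g., name)


def load_annotations(csv_path: str) -> list[ClipAnnotation]:
    """Load clip annotations from an assumed CSV file with the columns above."""
    records = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            records.append(ClipAnnotation(
                clip_path=row["clip_path"],
                action_type=row["action_type"],
                goe=float(row["goe"]),
                skater=row["skater"],
            ))
    return records


if __name__ == "__main__":
    clips = load_annotations("annotations.csv")  # hypothetical file name
    # Example: count clips per action class to inspect class balance.
    print(Counter(c.action_type for c in clips))
```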
