Paper Title
GaitTAKE: Gait Recognition by Temporal Attention and Keypoint-guided Embedding
Paper Authors
Paper Abstract
Gait recognition, which refers to the recognition or identification of a person based on their body shape and walking style, derived from video data captured from a distance, is widely used in crime prevention, forensic identification, and social security. However, to the best of our knowledge, most existing methods use appearance, posture, and temporal features without considering a learned temporal attention mechanism for fusing global and local information. In this paper, we propose a novel gait recognition framework, called Temporal Attention and Keypoint-guided Embedding (GaitTAKE), which effectively fuses temporal-attention-based global and local appearance features with temporally aggregated human pose features. Experimental results show that our proposed method achieves a new state of the art (SOTA) in gait recognition, with rank-1 accuracy of 98.0% (normal), 97.5% (bag), and 92.2% (coat) on the CASIA-B gait dataset, and 90.4% accuracy on the OU-MVLP gait dataset.
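To make the fusion idea in the abstract concrete, below is a minimal PyTorch sketch of attention-weighted temporal pooling of per-frame appearance features combined with a temporally aggregated pose embedding. The module name, dimensions, and aggregation choices (TemporalAttentionFusion, app_dim, pose_dim, mean pooling of pose features) are illustrative assumptions, not the authors' actual GaitTAKE architecture.

```python
# Minimal sketch: temporal attention over appearance features fused with an
# aggregated pose embedding. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class TemporalAttentionFusion(nn.Module):
    def __init__(self, app_dim=256, pose_dim=64):
        super().__init__()
        # Scores each frame's appearance feature for attention-weighted pooling.
        self.attn = nn.Linear(app_dim, 1)
        # Projects keypoint-derived pose features to a compact embedding.
        self.pose_proj = nn.Linear(pose_dim, pose_dim)

    def forward(self, app_feats, pose_feats):
        # app_feats:  (batch, frames, app_dim)  per-frame appearance features
        # pose_feats: (batch, frames, pose_dim) per-frame keypoint features
        weights = torch.softmax(self.attn(app_feats), dim=1)   # (B, T, 1)
        app_emb = (weights * app_feats).sum(dim=1)             # temporal attention pooling
        pose_emb = self.pose_proj(pose_feats.mean(dim=1))      # simple temporal aggregation
        return torch.cat([app_emb, pose_emb], dim=-1)          # fused gait embedding


if __name__ == "__main__":
    fusion = TemporalAttentionFusion()
    app = torch.randn(2, 30, 256)    # 30-frame clip, appearance features
    pose = torch.randn(2, 30, 64)    # matching keypoint features
    print(fusion(app, pose).shape)   # torch.Size([2, 320])
```

The fused embedding would then feed a metric-learning objective (e.g., triplet loss) for identification; that training detail is an assumption based on common gait recognition practice, not stated in the abstract.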