Paper Title
BTranspose: Bottleneck Transformers for Human Pose Estimation with Self-Supervised Pre-Training
Paper Authors
Paper Abstract
The task of 2D human pose estimation is challenging as the number of keypoints is typically large (~17), which necessitates robust neural network architectures and training pipelines that can capture the relevant features from the input image. These features are then aggregated to make accurate heatmap predictions, from which the final keypoints of human body parts can be inferred. Many papers in the literature use CNN-based architectures for the backbone and/or combine them with a transformer, after which the features are aggregated to make the final keypoint predictions [1]. In this paper, we consider the recently proposed Bottleneck Transformer [2], which effectively combines CNN and multi-head self-attention (MHSA) layers; we integrate it with a Transformer encoder and apply it to the task of 2D human pose estimation. We consider different backbone architectures and pre-train them using the DINO self-supervised learning method [3], and we find that this pre-training improves the overall prediction accuracy. We call our model BTranspose, and experiments show that on the COCO validation set it achieves an AP of 76.4, which is competitive with other methods such as [1] while using fewer network parameters. Furthermore, we also present the dependencies of the final predicted keypoints on both the MHSA block and the Transformer encoder layers, providing clues about the image sub-regions the network attends to at the mid and high levels.
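The abstract mentions inferring final keypoints from predicted heatmaps. A minimal NumPy sketch of the standard argmax-based decoding step is shown below; the function name, heatmap resolution, and peak placement are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Decode a (K, H, W) stack of heatmaps into K (x, y) keypoints
    by locating each heatmap's maximum activation (illustrative sketch)."""
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)          # flatten spatial dims per keypoint
    idx = flat.argmax(axis=1)               # index of the peak in each heatmap
    ys, xs = np.unravel_index(idx, (H, W))  # recover 2D peak coordinates
    return np.stack([xs, ys], axis=1)       # (K, 2) array of (x, y) pairs

# Toy example: 17 keypoints on a 64x48 heatmap grid (COCO uses 17 keypoints;
# the grid size here is just a common choice for illustration).
hm = np.zeros((17, 64, 48))
for k in range(17):
    hm[k, 2 * k, k + 1] = 1.0  # place one synthetic peak per keypoint
print(keypoints_from_heatmaps(hm)[:3])
```

In practice, pose estimators often refine this argmax with sub-pixel offsets before mapping coordinates back to the input image resolution.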