Paper Title

Light3DPose: Real-time Multi-Person 3D Pose Estimation from Multiple Views

Paper Authors

Alessio Elmi, Davide Mazzini, Pietro Tortella

Paper Abstract

We present an approach to perform 3D pose estimation of multiple people from a few calibrated camera views. Our architecture, leveraging the recently proposed unprojection layer, aggregates feature maps from a 2D pose estimator backbone into a comprehensive representation of the 3D scene. This intermediate representation is then processed by a fully-convolutional volumetric network and a decoding stage to extract 3D skeletons with sub-voxel accuracy. Our method achieves state-of-the-art MPJPE on the CMU Panoptic dataset using a few unseen views and obtains competitive results even with a single input view. We also assess the transfer learning capabilities of the model by testing it on the publicly available Shelf dataset, obtaining good performance metrics. The proposed method is inherently efficient: as a pure bottom-up approach, it is computationally independent of the number of people in the scene. Furthermore, even though the computational burden of the 2D part scales linearly with the number of input views, the overall architecture is able to exploit a very lightweight 2D backbone that is orders of magnitude faster than the volumetric counterpart, resulting in fast inference times. The system can run at 6 FPS, processing up to 10 camera views on a single 1080Ti GPU.
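
To make the pipeline described in the abstract more concrete, below is a minimal PyTorch sketch of one way the unprojection and volumetric decoding stages could be organized: per-view 2D feature maps are unprojected into a shared voxel grid, averaged across views, refined by a small fully-convolutional 3D network, and decoded into sub-voxel joint coordinates with a soft-argmax. All tensor shapes, the bilinear sampling, the mean aggregation, and the names unproject_features, VolumetricHead, and soft_argmax_3d are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: names, shapes, and the mean-pooling aggregation are
# assumptions, not the Light3DPose implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def unproject_features(feat_2d, proj, grid, img_size):
    """Unproject per-view 2D feature maps into a shared voxel volume.

    feat_2d:  (V, C, H, W)   backbone feature maps, one per camera view
    proj:     (V, 3, 4)      camera projection matrices (pinhole model assumed)
    grid:     (D, Hv, Wv, 3) world coordinates of the voxel centers
    img_size: (img_h, img_w) pixel size of the images the projections refer to
    returns:  (C, D, Hv, Wv) view-aggregated feature volume
    """
    V, C = feat_2d.shape[:2]
    D, Hv, Wv, _ = grid.shape
    pts = grid.reshape(-1, 3)
    pts_h = torch.cat([pts, torch.ones_like(pts[:, :1])], dim=1)   # homogeneous coords
    volume = feat_2d.new_zeros(V, C, D, Hv, Wv)
    for v in range(V):
        uvw = pts_h @ proj[v].T                                    # project voxel centers
        uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)              # pixel coordinates
        # normalize to [-1, 1] as required by grid_sample
        x = 2.0 * uv[:, 0] / (img_size[1] - 1) - 1.0
        y = 2.0 * uv[:, 1] / (img_size[0] - 1) - 1.0
        samp = torch.stack([x, y], dim=1).view(1, -1, 1, 2)
        feats = F.grid_sample(feat_2d[v:v + 1], samp, align_corners=True)
        volume[v] = feats.view(C, D, Hv, Wv)
    return volume.mean(dim=0)                                      # simple cross-view average


class VolumetricHead(nn.Module):
    """Tiny fully-convolutional 3D network producing per-joint heatmaps."""

    def __init__(self, in_ch, n_joints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, n_joints, kernel_size=1),
        )

    def forward(self, volume):          # volume: (B, C, D, Hv, Wv)
        return self.net(volume)         # (B, J, D, Hv, Wv) joint heatmaps


def soft_argmax_3d(heatmaps, grid):
    """Sub-voxel joint locations: heatmap-weighted mean of voxel-center coordinates."""
    B, J = heatmaps.shape[:2]
    weights = F.softmax(heatmaps.view(B, J, -1), dim=-1)           # (B, J, D*Hv*Wv)
    return weights @ grid.reshape(-1, 3)                           # (B, J, 3) world coords
```

In this sketch a single forward pass would chain unproject_features, VolumetricHead, and soft_argmax_3d; note that the soft-argmax decodes one skeleton per joint channel, so a bottom-up multi-person decoding stage like the one the abstract describes would additionally need a mechanism to separate and group the joints of different people, which is omitted here.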
