Paper Title

Vid2Actor: Free-viewpoint Animatable Person Synthesis from Video in the Wild

Authors

Chung-Yi Weng, Brian Curless, Ira Kemelmacher-Shlizerman

Abstract

Given an "in-the-wild" video of a person, we reconstruct an animatable model of the person in the video. The output model can be rendered in any body pose to any camera view, via the learned controls, without explicit 3D mesh reconstruction. At the core of our method is a volumetric 3D human representation reconstructed with a deep network trained on input video, enabling novel pose/view synthesis. Our method is an advance over GAN-based image-to-image translation since it allows image synthesis for any pose and camera via the internal 3D representation, while at the same time it does not require a pre-rigged model or ground truth meshes for training, as in mesh-based learning. Experiments validate the design choices and yield results on synthetic data and on real videos of diverse people performing unconstrained activities (e.g. dancing or playing tennis). Finally, we demonstrate motion re-targeting and bullet-time rendering with the learned models.
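The abstract's "volumetric 3D human representation" that "can be rendered ... to any camera view" implies rendering images by compositing a learned density/color volume along camera rays. As a minimal, illustrative sketch (not the paper's code), the standard volume-rendering rule for one ray looks like this; `render_ray` and its inputs are assumptions for illustration:

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (standard volume rendering).

    densities: (N,) non-negative density at each sample point
    colors:    (N, 3) RGB color at each sample point
    deltas:    (N,) spacing between consecutive samples
    Returns the composited RGB color seen along the ray.
    """
    # Convert density * step size into a per-sample opacity.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Weighted sum of sample colors gives the final pixel color.
    return (weights[:, None] * colors).sum(axis=0)
```

For example, a ray that first passes through empty space (zero density) and then hits a dense red sample returns approximately pure red, since the opaque sample absorbs nearly all remaining transmittance.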
