Paper Title


Deep Physics-aware Inference of Cloth Deformation for Monocular Human Performance Capture

Authors

Yue Li, Marc Habermann, Bernhard Thomaszewski, Stelian Coros, Thabo Beeler, Christian Theobalt

Abstract


Recent monocular human performance capture approaches have shown compelling dense tracking results of the full body from a single RGB camera. However, existing methods either do not estimate clothing at all or model cloth deformation with simple geometric priors instead of taking into account the underlying physical principles. This leads to noticeable artifacts in their reconstructions, e.g. baked-in wrinkles, implausible deformations that seemingly defy gravity, and intersections between cloth and body. To address these problems, we propose a person-specific, learning-based method that integrates a simulation layer into the training process to provide for the first time physics supervision in the context of weakly supervised deep monocular human performance capture. We show how integrating physics into the training process improves the learned cloth deformations, allows modeling clothing as a separate piece of geometry, and largely reduces cloth-body intersections. Relying only on weak 2D multi-view supervision during training, our approach leads to a significant improvement over current state-of-the-art methods and is thus a clear step towards realistic monocular capture of the entire deforming surface of a clothed human.
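The abstract does not detail how the physics supervision is formulated. As a rough, hypothetical illustration of one of its stated goals (reducing cloth-body intersections), the sketch below penalizes cloth vertices that penetrate a body approximated by spheres; the function name, the sphere approximation, and the quadratic penalty are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def cloth_body_intersection_loss(cloth_verts, sphere_centers, sphere_radii):
    """Penalize cloth vertices that fall inside body-approximating spheres.

    cloth_verts:    (N, 3) cloth vertex positions
    sphere_centers: (M, 3) centers of spheres approximating the body
    sphere_radii:   (M,)   sphere radii
    Returns a scalar penalty that is zero when no vertex penetrates.
    """
    # Distance from every cloth vertex to every sphere center: shape (N, M)
    d = np.linalg.norm(cloth_verts[:, None, :] - sphere_centers[None, :, :],
                       axis=-1)
    # Penetration depth, positive only for vertices inside a sphere
    depth = np.maximum(sphere_radii[None, :] - d, 0.0)
    # Quadratic penalty summed over all vertex-sphere pairs
    return float((depth ** 2).sum())
```

In a training loop, a term like this would be added to the 2D multi-view losses so that gradients push cloth vertices out of the body; a differentiable framework would replace NumPy, but the geometry is the same.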
