Paper Title
AvatarGen: A 3D Generative Model for Animatable Human Avatars
Paper Authors
Abstract
Unsupervised generation of 3D-aware clothed humans with various appearances and controllable geometries is important for creating virtual human avatars and other AR/VR applications. Existing methods are either limited to rigid object modeling, or are not generative and thus unable to generate high-quality virtual humans and animate them. In this work, we propose AvatarGen, the first method that enables not only geometry-aware clothed human synthesis with high-fidelity appearances but also disentangled human animation controllability, while requiring only 2D images for training. Specifically, we decompose generative 3D human synthesis into a pose-guided mapping and a canonical representation with predefined human pose and shape, such that the canonical representation can be explicitly driven to different poses and shapes with the guidance of the 3D parametric human model SMPL. AvatarGen further introduces a deformation network to learn non-rigid deformations for modeling fine-grained geometric details and pose-dependent dynamics. To improve the geometry quality of the generated human avatars, it leverages the signed distance field as a geometric proxy, which allows more direct regularization from the 3D geometric priors of SMPL. Benefiting from these designs, our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs. Furthermore, it is competent for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing. Code and pre-trained models will be available at http://jeff95.me/projects/avatargen.html.
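The abstract mentions using a signed distance field (SDF) as a geometric proxy so that SMPL's 3D shape prior can regularize the generated geometry directly. A minimal sketch of one way such a regularizer could look is shown below; the function name, the L1 penalty form, and the toy values are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def sdf_prior_loss(sdf_pred, sdf_prior, weight=1.0):
    """Hypothetical regularizer: an L1 penalty pulling the generator's
    predicted SDF values toward a body-prior SDF (e.g., one derived from
    an SMPL mesh) sampled at the same 3D query points."""
    return weight * np.mean(np.abs(np.asarray(sdf_pred) - np.asarray(sdf_prior)))

# Toy example: predicted SDF samples vs. the prior SDF at three query points.
pred = np.array([0.10, -0.05, 0.30])
prior = np.array([0.12, -0.02, 0.25])
loss = sdf_prior_loss(pred, prior)  # mean of |0.02|, |0.03|, |0.05|
```

In practice such a term would be added to the GAN training objective with a weight balancing adversarial realism against adherence to the SMPL shape prior.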