Paper Title

Monocular 3D Object Reconstruction with GAN Inversion

Paper Authors

Junzhe Zhang, Daxuan Ren, Zhongang Cai, Chai Kiat Yeo, Bo Dai, Chen Change Loy

Paper Abstract

Recovering a textured 3D mesh from a monocular image is highly challenging, particularly for in-the-wild objects that lack 3D ground truths. In this work, we present MeshInversion, a novel framework to improve the reconstruction by exploiting the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis. Reconstruction is achieved by searching for a latent space in the 3D GAN that best resembles the target mesh in accordance with the single view observation. Since the pre-trained GAN encapsulates rich 3D semantics in terms of mesh geometry and texture, searching within the GAN manifold thus naturally regularizes the realness and fidelity of the reconstruction. Importantly, such regularization is directly applied in the 3D space, providing crucial guidance of mesh parts that are unobserved in the 2D space. Experiments on standard benchmarks show that our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts. Moreover, it generalizes well to meshes that are less commonly seen, such as the extended articulation of deformable objects. Code is released at https://github.com/junzhezhang/mesh-inversion.
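
The abstract frames reconstruction as a search over the latent space of a frozen, pre-trained 3D GAN, guided by a single-view observation. Below is a minimal PyTorch sketch of that idea. Note that `generator` and `render` are hypothetical stand-ins (a pre-trained 3D GAN mapping a latent code to a textured mesh, and a differentiable renderer), and the losses are illustrative; they are not the paper's exact pipeline.

# Minimal sketch of GAN inversion by latent optimization (illustrative only).
# `generator(z)` -> (mesh, texture) and `render(mesh, texture)` -> (image,
# silhouette) are assumed callables, not names from the released code.
import torch
import torch.nn.functional as F

def invert(generator, render, target_image, target_mask,
           latent_dim=256, steps=500, lr=0.05):
    # Latent code to optimize; the generator's weights stay frozen.
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        mesh, texture = generator(z)               # decode mesh + texture from z
        image, silhouette = render(mesh, texture)  # project to the observed view
        # 2D losses against the single-view observation; because z stays on the
        # GAN manifold, unobserved 3D parts remain realistic and consistent.
        loss = F.l1_loss(image, target_image) + F.mse_loss(silhouette, target_mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return generator(z)  # final reconstructed textured mesh

The key design point, per the abstract, is that the regularization comes from staying on the GAN manifold in 3D rather than from 2D-only priors; the actual loss terms and latent parameterization are in the released code at https://github.com/junzhezhang/mesh-inversion.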
