Paper Title
Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects
Paper Authors
Paper Abstract
3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRF) from collections of monocular 2D images, even for topology-varying object categories. However, these methods still lack the capability to separately control the shape and appearance of the objects in the generated radiance fields. In this paper, we propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations. Our method generates deformable radiance fields, which build dense correspondences between the density fields of the objects and encode their appearances in a shared template field. Our disentanglement is achieved in an unsupervised manner, without introducing any extra labels beyond those used in previous 3D-aware GAN training. We also develop an effective image inversion scheme for reconstructing the radiance field of an object in a real monocular image and manipulating its shape and appearance. Experiments show that our method can successfully learn the generative model from unstructured monocular images and well disentangle the shape and appearance of objects (e.g., chairs) with large topological variance. The model trained on synthetic data can faithfully reconstruct the real object in a given single image and achieve high-quality texture and shape editing results.
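The abstract describes a generator in which a shape code drives a deformation of query points into a shared template field, and an appearance code conditions the color predicted there. Below is a minimal PyTorch sketch of that idea; the module names (DeformationNet, TemplateField), layer sizes, and code dimensions are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch: a shape-conditioned deformation warps points into a shared
# template radiance field; an appearance code modulates color there.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Maps a 3D point + shape code to its position in template space."""
    def __init__(self, shape_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point offset
        )

    def forward(self, x, z_shape):
        # x: (B, N, 3) query points; z_shape: (B, shape_dim)
        z = z_shape.unsqueeze(1).expand(-1, x.shape[1], -1)
        return x + self.mlp(torch.cat([x, z], dim=-1))

class TemplateField(nn.Module):
    """Shared template: density from warped position; color also
    conditioned on the appearance code."""
    def __init__(self, app_dim=128, hidden=256):
        super().__init__()
        self.density = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.color = nn.Sequential(
            nn.Linear(3 + app_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x_t, z_app):
        sigma = self.density(x_t)
        z = z_app.unsqueeze(1).expand(-1, x_t.shape[1], -1)
        rgb = self.color(torch.cat([x_t, z], dim=-1))
        return sigma, rgb

class DeformableRadianceField(nn.Module):
    def __init__(self):
        super().__init__()
        self.deform = DeformationNet()
        self.template = TemplateField()

    def forward(self, x, z_shape, z_app):
        x_t = self.deform(x, z_shape)     # dense correspondence into template
        return self.template(x_t, z_app)  # (density, color) for volume rendering

# Usage: query the field along camera rays and volume-render as in standard NeRF.
model = DeformableRadianceField()
pts = torch.randn(2, 1024, 3)             # (batch, n_points, 3)
z_s, z_a = torch.randn(2, 128), torch.randn(2, 128)
sigma, rgb = model(pts, z_s, z_a)
```

For the inversion scheme mentioned in the abstract, a natural realization (again an assumption, not the paper's exact procedure) is to freeze the trained generator and optimize the shape and appearance codes, and possibly the camera pose, against a photometric loss on the input image; editing then amounts to swapping or interpolating one of the recovered codes while holding the other fixed.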