Paper Title


gCoRF: Generative Compositional Radiance Fields

Paper Authors

Mallikarjun B R, Ayush Tewari, Xingang Pan, Mohamed Elgharib, Christian Theobalt

Abstract


3D generative models of objects enable photorealistic image synthesis with 3D control. Existing methods model the scene as a global scene representation, ignoring the compositional aspect of the scene. Compositional reasoning can enable a wide variety of editing applications, in addition to enabling generalizable 3D reasoning. In this paper, we present a compositional generative model, where each semantic part of the object is represented as an independent 3D representation learned from only in-the-wild 2D data. We start with a global generative model (GAN) and learn to decompose it into different semantic parts using supervision from 2D segmentation masks. We then learn to composite independently sampled parts in order to create coherent global scenes. Different parts can be independently sampled while keeping the rest of the object fixed. We evaluate our method on a wide variety of objects and parts and demonstrate editing applications.
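The abstract describes compositing independently sampled semantic parts into one coherent radiance field. The paper does not spell out the compositing operator here, but a common choice for combining multiple radiance fields is to sum the per-part densities and take a density-weighted average of the per-part colors before volume rendering. The sketch below illustrates that standard NeRF-style composition on NumPy arrays; the function names and the exact compositing rule are illustrative assumptions, not the paper's verified implementation.

```python
import numpy as np

def composite_parts(sigmas, colors):
    """Combine per-part radiance fields sampled at the same ray points.

    sigmas: list of (N,) density arrays, one per semantic part.
    colors: list of (N, 3) RGB arrays, one per semantic part.
    Assumed compositing rule: densities add; colors are density-weighted.
    """
    sigma = np.sum(sigmas, axis=0)                                   # (N,)
    weighted = np.sum([s[:, None] * c for s, c in zip(sigmas, colors)], axis=0)
    color = weighted / np.maximum(sigma[:, None], 1e-8)              # avoid /0
    return sigma, color

def volume_render(sigma, color, deltas):
    """Standard volume rendering of one ray from composited samples.

    deltas: (N,) distances between consecutive samples along the ray.
    """
    alpha = 1.0 - np.exp(-sigma * deltas)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)         # (3,) ray RGB
```

Because each part contributes its own density field, swapping one part's latent code (e.g. resampling the hair while keeping the face fixed) only changes that part's `sigma`/`color` inputs, and the composition above still yields a consistent global rendering.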
