Paper Title
SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene
Paper Authors
Paper Abstract
Generative models have shown great promise in synthesizing photorealistic 3D objects, but they require large amounts of training data. We introduce SinGRAF, a 3D-aware generative model that is trained with a few input images of a single scene. Once trained, SinGRAF generates different realizations of this 3D scene that preserve the appearance of the input while varying scene layout. For this purpose, we build on recent progress in 3D GAN architectures and introduce a novel progressive-scale patch discrimination approach during training. With several experiments, we demonstrate that the results produced by SinGRAF outperform the closest related works in both quality and diversity by a large margin.
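The abstract names a "progressive-scale patch discrimination" strategy but does not spell out its mechanics. As a rough illustration only, the minimal PyTorch sketch below shows one plausible reading: random patches covering a progressively smaller fraction of the image are cropped and resampled to a fixed resolution before being fed to a patch discriminator. The function names (anneal_scale, sample_patch), the linear schedule, and all hyperparameters are illustrative assumptions, not taken from the paper or its code.

# Hypothetical sketch of progressive-scale patch sampling; not the authors' implementation.
import torch
import torch.nn.functional as F

def anneal_scale(step, total_steps, s_max=1.0, s_min=0.25):
    """Linearly shrink the patch scale from near-global (s_max) to local (s_min)."""
    t = min(step / total_steps, 1.0)
    return s_max + t * (s_min - s_max)

def sample_patch(img, scale, patch_res=64):
    """Crop a random square patch covering `scale` of the image extent and
    resample it to a fixed resolution, so the discriminator input size is constant."""
    b, _, h, w = img.shape
    # Random patch center in normalized [-1, 1] coordinates, kept inside the image.
    max_offset = 1.0 - scale
    cx = (torch.rand(b, 1, 1, device=img.device) * 2 - 1) * max_offset
    cy = (torch.rand(b, 1, 1, device=img.device) * 2 - 1) * max_offset
    # Sampling grid spanning a `scale` fraction of the image around the center.
    lin = torch.linspace(-scale, scale, patch_res, device=img.device)
    gy, gx = torch.meshgrid(lin, lin, indexing="ij")
    grid = torch.stack((gx.expand(b, -1, -1) + cx,
                        gy.expand(b, -1, -1) + cy), dim=-1)
    return F.grid_sample(img, grid, align_corners=False)

# Example: early in training the patch is almost the full view; later it zooms in.
img = torch.randn(2, 3, 128, 128)
patch = sample_patch(img, anneal_scale(step=8000, total_steps=10000))  # (2, 3, 64, 64)

In a GAN training loop under this reading, real patches from the input photographs and fake patches rendered from the generative radiance field would both be sampled at the current scale, so the discriminator first judges near-global scene layout and progressively shifts toward local appearance detail.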