Paper Title
SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis
Authors
Abstract
We study the problem of novel view synthesis of objects from a single image. Existing methods have demonstrated potential in single-view view synthesis. However, they still fail to recover fine appearance details, especially in self-occluded areas, because a single view provides only limited information. We observe that man-made objects usually exhibit symmetric appearances, which introduce additional prior knowledge. Motivated by this, we investigate the potential performance gains of explicitly embedding symmetry into the scene representation. In this paper, we propose SymmNeRF, a neural radiance field (NeRF) based framework that combines local and global conditioning through the introduction of symmetry priors. In particular, SymmNeRF takes the pixel-aligned image features and the corresponding symmetric features as extra inputs to the NeRF, whose parameters are generated by a hypernetwork. As the parameters are conditioned on the image-encoded latent codes, SymmNeRF is scene-independent and can generalize to new scenes. Experiments on synthetic and real-world datasets show that SymmNeRF synthesizes novel views with more details regardless of the pose transformation, and demonstrates good generalization when applied to unseen objects. Code is available at: https://github.com/xingyi-li/SymmNeRF.
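The core idea described in the abstract, that a symmetric object lets a 3D query point borrow appearance information from its mirrored counterpart, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the symmetry plane is x = 0 in canonical object coordinates and uses a simple pinhole camera; the function names and camera parameters are hypothetical.

```python
import numpy as np

def reflect_across_plane(points, normal=np.array([1.0, 0.0, 0.0])):
    """Mirror 3D points across a plane through the origin with the given
    unit normal (assumed symmetry plane x = 0 by default)."""
    normal = normal / np.linalg.norm(normal)
    return points - 2.0 * (points @ normal)[:, None] * normal

def project(points, K, R, t):
    """Pinhole projection of world-space points (N, 3) into pixel
    coordinates (N, 2), given intrinsics K and extrinsics R, t."""
    cam = points @ R.T + t            # world -> camera coordinates
    uv = cam @ K.T                    # camera -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

# A query point and its mirror share y/z and flip the sign of x.
p = np.array([[0.3, 0.1, -0.2]])
p_sym = reflect_across_plane(p)
# p_sym == [[-0.3, 0.1, -0.2]]

# Both p and p_sym would then be projected into the input image to
# sample pixel-aligned features and symmetric features, respectively,
# which are fed to the NeRF as extra conditioning.
```

Sampling image features at both projected locations is what lets the network recover detail in self-occluded regions: the mirrored point often projects to a visible part of the input view.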