Paper Title
Nerfies: Deformable Neural Radiance Fields
Paper Authors
Paper Abstract
We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones. Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF. We observe that these NeRF-like deformation fields are prone to local minima, and propose a coarse-to-fine optimization method for coordinate-based models that allows for more robust optimization. By adapting principles from geometry processing and physical simulation to NeRF-like models, we propose an elastic regularization of the deformation field that further improves robustness. We show that our method can turn casually captured selfie photos/videos into deformable NeRF models that allow for photorealistic renderings of the subject from arbitrary viewpoints, which we dub "nerfies." We evaluate our method by collecting time-synchronized data using a rig with two mobile phones, yielding train/validation images of the same pose at different viewpoints. We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
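The core idea in the abstract can be sketched as two coordinate-based networks: a deformation field that maps an observed point (conditioned on a per-frame deformation code) to an offset into a canonical frame, and a canonical NeRF that maps the warped point plus view direction to color and density. The sketch below is a minimal illustration of that composition, not the authors' implementation: the tiny stand-in MLPs, their sizes, and the 8-dimensional deformation code are all hypothetical, and real NeRF models additionally apply positional encoding and volume rendering.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden, d_out):
    # Hypothetical tiny one-hidden-layer MLP parameters (stand-in for the
    # much deeper networks used in practice).
    return (rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))

def mlp(params, x):
    w1, b1, w2, b2 = params
    h = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2

# Deformation field T: (observed point x, per-frame code w) -> 3D offset.
deform_params = init_mlp(3 + 8, 32, 3)
# Canonical NeRF F: (canonical point, view direction d) -> (RGB, density).
nerf_params = init_mlp(3 + 3, 32, 4)

def nerfies_forward(x, d, w):
    # Warp the observed point into the canonical frame, then query the
    # canonical NeRF at the warped location.
    offset = mlp(deform_params, np.concatenate([x, w]))
    x_canonical = x + offset
    out = mlp(nerf_params, np.concatenate([x_canonical, d]))
    rgb, sigma = out[:3], out[3]
    return rgb, sigma

x = np.zeros(3)                    # a sample point along a camera ray
d = np.array([0.0, 0.0, 1.0])      # viewing direction
w = np.zeros(8)                    # hypothetical per-frame deformation code
rgb, sigma = nerfies_forward(x, d, w)
print(rgb.shape)  # (3,)
```

Because the deformation field is itself a coordinate-based MLP, its optimization is prone to local minima, which motivates the coarse-to-fine scheduling and elastic regularization the abstract mentions.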