Title

DynIBaR: Neural Dynamic Image-Based Rendering

Authors

Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely

Abstract

We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene. State-of-the-art methods based on temporally varying Neural Radiance Fields (aka dynamic NeRFs) have shown impressive results on this task. However, for long videos with complex object motions and uncontrolled camera trajectories, these methods can produce blurry or inaccurate renderings, hampering their use in real-world applications. Instead of encoding the entire dynamic scene within the weights of MLPs, we present a new approach that addresses these limitations by adopting a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views in a scene-motion-aware manner. Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories. We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets, and also apply our approach to in-the-wild videos with challenging camera and object motion, where prior methods fail to produce high-quality renderings. Our project webpage is at dynibar.github.io.
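
To make the "scene-motion-aware" aggregation concrete, here is a minimal Python sketch of the general idea: samples along a target ray are displaced by a motion model to each nearby view's timestamp, projected into that view's camera, and the sampled image features are combined across views. The toy pinhole `project`, the hand-written `motion_field`, and the plain mean over views are illustrative assumptions, not the paper's actual architecture, which learns both the motion representation and how source views are weighted.

```python
# A minimal sketch of motion-aware feature aggregation along one target ray.
# Assumes toy pinhole cameras and a placeholder constant-flow motion model;
# all function names here are hypothetical, not the authors' API.
import numpy as np

def project(K, E, pts_world):
    """Project 3D world points into a source camera (pinhole model)."""
    pts_h = np.concatenate([pts_world, np.ones((len(pts_world), 1))], axis=1)
    cam = (E @ pts_h.T).T                       # world -> camera coordinates
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]    # pixel coords, depth

def motion_field(pts, t_target, t_source):
    """Hypothetical scene-motion model: displace points from the target time
    to a source view's time. A real system learns this; here it is a fixed
    translation proportional to the time offset."""
    dt = t_source - t_target
    return pts + dt * np.array([0.1, 0.0, 0.0])  # toy constant scene flow

def sample_features(feat_map, uv):
    """Nearest-neighbor lookup of per-pixel features (bilinear in practice)."""
    h, w, _ = feat_map.shape
    x = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    y = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return feat_map[y, x]

def aggregate_ray(ray_o, ray_d, t_target, sources, n_samples=64):
    """For each sample on the target ray: warp it to each source view's time,
    project, gather features, and average across views. Returns an
    (n_samples, feat_dim) array of aggregated features."""
    z = np.linspace(1.0, 5.0, n_samples)                 # sample depths
    pts = ray_o + z[:, None] * ray_d                     # points on the ray
    per_view = []
    for src in sources:
        warped = motion_field(pts, t_target, src["t"])   # motion-adjusted pts
        uv, depth = project(src["K"], src["E"], warped)
        feats = sample_features(src["feat"], uv)
        feats[depth <= 0] = 0.0                          # behind the camera
        per_view.append(feats)
    return np.mean(per_view, axis=0)   # real systems use learned weights

# Toy usage: two source views at different times with random feature maps.
rng = np.random.default_rng(0)
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
sources = [{"K": K, "E": np.eye(4)[:3], "t": t,
            "feat": rng.standard_normal((64, 64, 8))} for t in (0.0, 1.0)]
feats = aggregate_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.5, sources)
print(feats.shape)  # (64, 8)
```

The aggregated per-sample features would then be decoded to color and density and composited with standard volume rendering; the key difference from a static IBR pipeline is the motion-field warp applied before projection.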
