Paper Title


Deep Multi Depth Panoramas for View Synthesis

Authors

Kai-En Lin, Zexiang Xu, Ben Mildenhall, Pratul P. Srinivasan, Yannick Hold-Geoffroy, Stephen DiVerdi, Qi Sun, Kalyan Sunkavalli, Ravi Ramamoorthi

Abstract


We propose a learning-based approach for novel view synthesis for multi-camera 360$^{\circ}$ panorama capture rigs. Previous work constructs RGBD panoramas from such data, allowing for view synthesis with small amounts of translation, but cannot handle the disocclusions and view-dependent effects that are caused by large translations. To address this issue, we present a novel scene representation - Multi Depth Panorama (MDP) - that consists of multiple RGBD$\alpha$ panoramas that represent both scene geometry and appearance. We demonstrate a deep neural network-based method to reconstruct MDPs from multi-camera 360$^{\circ}$ images. MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering. We demonstrate this via experiments on both synthetic and real data and comparisons with previous state-of-the-art methods spanning both learning-based approaches and classical RGBD-based methods.
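The abstract describes rendering novel views from multiple RGBD$\alpha$ panorama layers. A minimal sketch of the final back-to-front alpha ("over") compositing step such layered representations typically use, assuming the layers have already been reprojected to the target viewpoint using their depth channels (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front "over" compositing of RGBD-alpha layers.

    `layers` is a list of (rgb, alpha) pairs ordered far-to-near, where
    rgb has shape (H, W, 3) and alpha has shape (H, W, 1). In a full
    pipeline each layer would first be warped to the novel viewpoint
    using its depth channel; that reprojection is omitted here.
    """
    out = np.zeros_like(layers[0][0])
    for rgb, alpha in layers:  # far to near: each layer covers what lies behind it
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Example: an opaque white far layer under a half-transparent black near layer
far = (np.ones((2, 2, 3)), np.ones((2, 2, 1)))
near = (np.zeros((2, 2, 3)), np.full((2, 2, 1), 0.5))
result = composite_layers([far, near])  # blends to mid-gray (0.5)
```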
