Paper Title
360Roam: Real-Time Indoor Roaming Using Geometry-Aware 360$^\circ$ Radiance Fields
Paper Authors
Paper Abstract
Virtual tours built from sparse 360$^\circ$ images are widely used, yet the sparsity of the captured views hinders a smooth and immersive roaming experience. The emergence of Neural Radiance Fields (NeRF) has showcased significant progress in novel view synthesis, unlocking the potential for immersive scene exploration. Nevertheless, previous NeRF works primarily focus on object-centric scenarios and suffer noticeable performance degradation when applied to outward-facing, large-scale scenes due to limitations in scene parameterization. To achieve seamless, real-time indoor roaming, we propose a novel approach based on geometry-aware radiance fields with adaptively assigned local radiance fields. First, we employ multiple 360$^\circ$ images of an indoor scene to progressively reconstruct explicit geometry in the form of a probabilistic occupancy map derived from a global omnidirectional radiance field. We then assign local radiance fields through an adaptive divide-and-conquer strategy based on the recovered geometry. By combining geometry-aware sampling with decomposition of the global radiance field, our system exploits positional encoding and compact neural networks to improve both rendering quality and speed. Additionally, a floorplan extracted from the scene provides visual guidance, contributing to a realistic roaming experience. To demonstrate the effectiveness of the system, we curated a diverse dataset of 360$^\circ$ images covering various real-life scenes and conducted extensive experiments on it. Quantitative and qualitative comparisons against baseline approaches demonstrate the superior performance of our system in large-scale indoor scene roaming.
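The abstract mentions geometry-aware sampling guided by a probabilistic occupancy map, i.e., placing ray samples only where the recovered geometry indicates the scene is likely occupied. The sketch below illustrates that general idea; it is not the paper's implementation, and every name, shape, and threshold (e.g. `occ_threshold`) is an assumption for illustration only.

```python
import numpy as np

def occupancy_guided_samples(origin, direction, occupancy, voxel_size,
                             near=0.1, far=10.0, n_coarse=128,
                             occ_threshold=0.5):
    """Place coarse samples along a ray, keeping only those that fall in
    likely-occupied voxels of a probabilistic occupancy grid.

    occupancy : 3D array of occupancy probabilities; the grid's corner is
    assumed to sit at world coordinate (0, 0, 0).
    """
    direction = direction / np.linalg.norm(direction)
    # Uniform coarse sample distances along the ray.
    t = np.linspace(near, far, n_coarse)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    # Map each sample point to its voxel index.
    idx = np.floor(pts / voxel_size).astype(int)
    # Discard samples outside the grid bounds.
    shape = np.array(occupancy.shape)
    in_bounds = np.all((idx >= 0) & (idx < shape), axis=1)
    keep = np.zeros(n_coarse, dtype=bool)
    ib = np.where(in_bounds)[0]
    probs = occupancy[idx[ib, 0], idx[ib, 1], idx[ib, 2]]
    # Keep only samples whose voxel is likely occupied.
    keep[ib] = probs > occ_threshold
    return t[keep], pts[keep]
```

In a full renderer, the surviving samples would then be fed to the (local) radiance-field network, so compute is spent on occupied space rather than on empty regions of a large indoor scene.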