Paper Title
SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion
Paper Authors
Paper Abstract
Real-time scene reconstruction from depth data inevitably suffers from occlusion, leading to incomplete 3D models. Partial reconstructions, in turn, limit the performance of algorithms that leverage them for applications such as augmented reality, robotic navigation, and 3D mapping. Most methods address this issue by predicting the missing geometry as an offline optimization, making them incompatible with real-time applications. We propose a framework that ameliorates this issue by performing scene reconstruction and semantic scene completion jointly, in an incremental and real-time manner, from an input sequence of depth maps. Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model. We evaluate the proposed approach quantitatively and qualitatively, demonstrating that our method can obtain accurate 3D semantic scene completion in real time.
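The abstract mentions using voxel states to fuse semantic completion with a global 3D model. A minimal sketch of this general idea (not the paper's actual algorithm; the state values, function name, and confidence threshold are all hypothetical assumptions) is to gate the fusion so that completed semantics are written only into voxels the sensor has not yet observed, keeping measured geometry authoritative:

```python
import numpy as np

# Hypothetical voxel states; the paper's actual state definitions may differ.
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def fuse_completion(global_labels, voxel_states, predicted_labels,
                    confidence, threshold=0.5):
    """Write predicted semantic labels only into UNKNOWN voxels whose
    prediction confidence exceeds the threshold; observed voxels keep
    their reconstructed labels."""
    mask = (voxel_states == UNKNOWN) & (confidence > threshold)
    fused = global_labels.copy()
    fused[mask] = predicted_labels[mask]
    return fused

# Toy 1-D "grid" of four voxels (0 = unlabeled).
states        = np.array([OCCUPIED, UNKNOWN, UNKNOWN, FREE])
global_labels = np.array([3, 0, 0, 0])
pred          = np.array([1, 5, 7, 2])
conf          = np.array([0.9, 0.8, 0.3, 0.9])

print(fuse_completion(global_labels, states, pred, conf))
# -> [3 5 0 0]: only the confident UNKNOWN voxel is filled in.
```

The design choice this illustrates is precedence: sensor observations overwrite predictions, never the reverse, so the completed regions can be refined as new depth maps arrive.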