Paper Title

Neural Adaptive Scene Tracing (NAScenT)

Authors

Rui Li, Darius Rückert, Yuanhao Wang, Ramzi Idoughi, Wolfgang Heidrich

Abstract

Neural rendering with implicit neural networks has recently emerged as an attractive proposition for scene reconstruction, achieving excellent quality albeit at high computational cost. While the most recent generation of such methods has made progress on rendering (inference) times, very little progress has been made on improving reconstruction (training) times. In this work, we present Neural Adaptive Scene Tracing (NAScenT), the first neural rendering method based on directly training a hybrid explicit-implicit neural representation. NAScenT uses a hierarchical octree representation with one neural network per leaf node and combines this representation with a two-stage sampling process that concentrates ray samples where they matter most: near object surfaces. As a result, NAScenT is capable of reconstructing challenging scenes, including both large, sparsely populated volumes such as UAV-captured outdoor environments and small scenes with high geometric complexity. NAScenT outperforms existing neural rendering approaches in terms of both quality and training time.
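
To make the architecture described in the abstract concrete, below is a minimal, illustrative PyTorch sketch of the two ideas it names: an octree whose leaf cells each own a small MLP, and a two-stage sampling pass that first probes each ray coarsely and then concentrates additional samples where the coarse densities are large, i.e. near surfaces. All class names, layer sizes, and sample counts are hypothetical choices for illustration; they are not taken from the NAScenT implementation, which additionally adapts the octree structure during training rather than using the fixed, uniform subdivision shown here.

```python
# Illustrative sketch only: a toy octree-of-MLPs representation with
# two-stage ray sampling. Names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class LeafMLP(nn.Module):
    """Small per-leaf network mapping a 3D point to (density, RGB)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),               # sigma + rgb
        )

    def forward(self, x):
        out = self.net(x)
        sigma = torch.relu(out[..., :1])        # non-negative density
        rgb = torch.sigmoid(out[..., 1:])       # colors in [0, 1]
        return torch.cat([sigma, rgb], dim=-1)


class OctreeScene(nn.Module):
    """Fixed-depth octree over [0, 1]^3 with one LeafMLP per leaf cell."""
    def __init__(self, depth: int = 2):
        super().__init__()
        self.res = 2 ** depth                   # leaf cells per axis
        self.leaves = nn.ModuleList([LeafMLP() for _ in range(self.res ** 3)])

    def leaf_index(self, x):
        cell = (x.clamp(0.0, 1.0 - 1e-6) * self.res).long()
        return (cell[..., 0] * self.res + cell[..., 1]) * self.res + cell[..., 2]

    def forward(self, x):
        """Route every query point to the MLP of the leaf that contains it."""
        out = torch.zeros(*x.shape[:-1], 4)
        idx = self.leaf_index(x)
        for i in idx.unique().tolist():         # one batched call per active leaf
            mask = idx == i
            out[mask] = self.leaves[i](x[mask])
        return out


def two_stage_render(scene, origins, dirs, n_coarse=32, n_fine=64, near=0.0, far=1.7):
    """Coarse pass locates density along each ray; fine pass concentrates samples there."""
    # Stage 1: uniform coarse samples along every ray.
    t_coarse = torch.linspace(near, far, n_coarse).expand(origins.shape[0], -1).contiguous()
    pts = origins[:, None] + dirs[:, None] * t_coarse[..., None]
    sigma = scene(pts)[..., 0]

    # Turn the coarse densities into a per-ray distribution over ray segments and
    # draw fine samples from it (NeRF-style hierarchical sampling), so that
    # samples cluster near surfaces.
    pdf = (sigma + 1e-5) / (sigma + 1e-5).sum(dim=-1, keepdim=True)
    seg = torch.multinomial(pdf, n_fine, replacement=True)
    jitter = torch.rand_like(seg, dtype=torch.float32) * (far - near) / n_coarse
    t_fine = t_coarse.gather(-1, seg) + jitter

    # Stage 2: evaluate the octree at all samples and alpha-composite.
    t_all, _ = torch.sort(torch.cat([t_coarse, t_fine], dim=-1), dim=-1)
    pts = origins[:, None] + dirs[:, None] * t_all[..., None]
    out = scene(pts)
    sigma, rgb = out[..., 0], out[..., 1:]
    delta = torch.diff(t_all, dim=-1, append=t_all[..., -1:] + 1e10)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=-2)   # composited ray colors
```

In a training loop, origins and dirs would be per-pixel camera rays, and the composited colors returned by two_stage_render would be compared against the captured images with a photometric loss; the per-leaf MLPs are optimized jointly through this rendering.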
