Paper Title
TT-NF: Tensor Train Neural Fields
Paper Authors
Paper Abstract
Learning neural fields has been an active topic in deep learning research, focusing, among other issues, on finding more compact and easy-to-fit representations. In this paper, we introduce a novel low-rank representation termed Tensor Train Neural Fields (TT-NF) for learning neural fields on dense regular grids and efficient methods for sampling from them. Our representation is a TT parameterization of the neural field, trained with backpropagation to minimize a non-convex objective. We analyze the effect of low-rank compression on the downstream task quality metrics in two settings. First, we demonstrate the efficiency of our method in a sandbox task of tensor denoising, which admits comparison with SVD-based schemes designed to minimize reconstruction error. Furthermore, we apply the proposed approach to Neural Radiance Fields, where the low-rank structure of the field corresponding to the best quality can be discovered only through learning.
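The core idea described above, parameterizing a field on a dense regular grid as a Tensor Train (TT) and fitting the cores by backpropagation, can be illustrated with a minimal sketch. This is not the authors' implementation: the mode sizes, TT rank, optimizer settings, and the toy denoising objective below are illustrative assumptions.

```python
# Minimal sketch of a TT-parameterized field on a dense regular grid,
# trained with backpropagation to minimize reconstruction error.
# All shapes, ranks, and hyperparameters are hypothetical.
import torch


class TTField(torch.nn.Module):
    """Represents an N-dimensional grid tensor as a chain of TT cores."""

    def __init__(self, mode_sizes, rank):
        super().__init__()
        # Boundary TT ranks are 1; interior ranks set to a single value here.
        ranks = [1] + [rank] * (len(mode_sizes) - 1) + [1]
        self.cores = torch.nn.ParameterList([
            torch.nn.Parameter(0.1 * torch.randn(ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(mode_sizes)
        ])

    def sample(self, idx):
        """Evaluate the field at integer grid coordinates idx of shape (B, ndim)."""
        # Slice each core at the requested index and contract along the chain.
        out = self.cores[0][:, idx[:, 0], :].permute(1, 0, 2)  # (B, 1, r_1)
        for k in range(1, len(self.cores)):
            core = self.cores[k][:, idx[:, k], :].permute(1, 0, 2)  # (B, r_k, r_{k+1})
            out = torch.bmm(out, core)
        return out.squeeze(-1).squeeze(-1)  # (B,) scalar field values


# Sandbox denoising: fit the low-rank field to noisy grid observations.
field = TTField(mode_sizes=(32, 32, 32), rank=8)
opt = torch.optim.Adam(field.parameters(), lr=1e-2)
idx = torch.randint(0, 32, (1024, 3))  # random grid coordinates
noisy = torch.randn(1024)              # placeholder noisy targets
for _ in range(100):
    opt.zero_grad()
    # Mean squared error; non-convex in the TT cores, as noted in the abstract.
    loss = torch.mean((field.sample(idx) - noisy) ** 2)
    loss.backward()
    opt.step()
```

Because only the selected core slices participate in each contraction, sampling a batch of grid points avoids materializing the full dense tensor, which is what makes such a low-rank representation practical for large grids.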