Paper Title
Learning Smooth Neural Functions via Lipschitz Regularization
Paper Authors
Paper Abstract
Neural implicit fields have recently emerged as a useful representation for 3D shapes. These fields are commonly represented as neural networks which map latent descriptors and 3D coordinates to implicit function values. The latent descriptor of a neural field acts as a deformation handle for the 3D shape it represents. Thus, smoothness with respect to this descriptor is paramount for performing shape-editing operations. In this work, we introduce a novel regularization designed to encourage smooth latent spaces in neural fields by penalizing the upper bound on the field's Lipschitz constant. Compared with prior Lipschitz regularized networks, ours is computationally fast, can be implemented in four lines of code, and requires minimal hyperparameter tuning for geometric applications. We demonstrate the effectiveness of our approach on shape interpolation and extrapolation as well as partial shape reconstruction from 3D point clouds, showing both qualitative and quantitative improvements over existing state-of-the-art and non-regularized baselines.
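To make the regularizer concrete, below is a minimal PyTorch sketch of the core idea, under the assumption of an MLP with 1-Lipschitz activations (e.g. ReLU): the product of per-layer weight-matrix norms upper-bounds the network's Lipschitz constant, so penalizing that product encourages a smooth field. This is an illustrative sketch, not the paper's exact implementation (which pairs trainable per-layer bounds with weight normalization); the network shape, the toy data, and the regularization weight alpha are all hypothetical.

```python
import torch
import torch.nn as nn

def lipschitz_bound(mlp: nn.Sequential) -> torch.Tensor:
    """Upper bound on the Lipschitz constant (w.r.t. the infinity norm) of an
    MLP with 1-Lipschitz activations: the product of per-layer matrix norms."""
    bound = torch.ones(())
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            # Matrix infinity norm: max over rows of the absolute row sums.
            bound = bound * layer.weight.abs().sum(dim=1).max()
    return bound

# Hypothetical field network: (3D coordinate + latent code) -> implicit value.
mlp = nn.Sequential(nn.Linear(3 + 32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

x = torch.randn(128, 3 + 32)   # toy batch of (coordinate, latent) inputs
target = torch.randn(128, 1)   # toy implicit function values
alpha = 1e-6                   # illustrative regularization weight

# Task loss plus the Lipschitz penalty: the regularizer itself is one line.
loss = nn.functional.mse_loss(mlp(x), target) + alpha * lipschitz_bound(mlp)
loss.backward()
```

Because the bound is a differentiable product of per-layer norms, it trains with standard backpropagation and introduces only a single extra hyperparameter (alpha), consistent with the minimal tuning the abstract describes.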