Paper Title
Self-Sampling for Neural Point Cloud Consolidation
Paper Authors
Paper Abstract
We introduce a novel technique for neural point cloud consolidation that learns solely from the input point cloud. Unlike other point-upsampling methods, which analyze shapes via local patches, in this work we learn from global subsets. We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network. Specifically, we define source and target subsets according to the desired consolidation criteria (e.g., generating sharp points or points in sparse regions). The network learns a mapping from source to target subsets, and implicitly learns to consolidate the point cloud. During inference, the network is fed random subsets of points from the input, which it displaces to synthesize a consolidated point set. We leverage the inductive bias of neural networks to eliminate noise and outliers, a notoriously difficult problem in point cloud consolidation. The shared weights of the network are optimized over the entire shape, learning non-local statistics and exploiting the recurrence of local-scale geometries. Specifically, the network encodes the distribution of the underlying shape surface within a fixed set of local kernels, which results in the best explanation of that surface. We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
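To make the procedure described in the abstract concrete, here is a minimal sketch in PyTorch of the self-sampling training loop: source and target subsets are repeatedly drawn from the input cloud, and a network learns to displace source points toward target points. Note that `DisplacementNet`, the Chamfer loss, and the boolean `is_target` criterion mask are all our own illustrative assumptions, not the paper's actual architecture, loss, or sharpness criterion.

```python
# Minimal sketch of self-sampling consolidation (assumed details, not the
# paper's exact method): a per-point MLP learns displacements from randomly
# self-sampled source subsets toward target subsets of the input cloud.
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    """Hypothetical per-point MLP predicting a 3D displacement per point."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts):            # pts: (N, 3)
        return pts + self.mlp(pts)     # displace the input points

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)              # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def train_self_sampling(cloud, is_target, steps=1000, subset=512):
    """cloud: (P, 3) input points; is_target: (P,) bool mask marking points
    that satisfy the consolidation criterion (e.g., near sharp regions).
    The mask itself is an assumed stand-in for the paper's criteria."""
    net = DisplacementNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    src_pool, tgt_pool = cloud[~is_target], cloud[is_target]
    for _ in range(steps):
        # Self-sample global source and target subsets from the input.
        src = src_pool[torch.randint(len(src_pool), (subset,))]
        tgt = tgt_pool[torch.randint(len(tgt_pool), (subset,))]
        loss = chamfer(net(src), tgt)  # learn the source-to-target mapping
        opt.zero_grad(); loss.backward(); opt.step()
    return net
```

At inference time, per the abstract, one would repeatedly feed random subsets of the input cloud through the trained network and aggregate the displaced outputs into the consolidated point set.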