Paper Title
RCP: Recurrent Closest Point for Scene Flow Estimation on 3D Point Clouds
Authors
Abstract
3D motion estimation, including scene flow and point cloud registration, has drawn increasing interest. Inspired by 2D flow estimation, recent methods employ deep neural networks to construct a cost volume for estimating accurate 3D flow. However, these methods are limited by the fact that it is difficult to define a search window on point clouds because of the irregular data structure. In this paper, we avoid this irregularity with a simple yet effective method. We decompose the problem into two interlaced stages, where the 3D flows are optimized point-wise at the first stage and then globally regularized in a recurrent network at the second stage. Therefore, the recurrent network only receives regular point-wise information as input. In the experiments, we evaluate the proposed method on both the 3D scene flow estimation and the point cloud registration tasks. For 3D scene flow estimation, we make comparisons on the widely used FlyingThings3D and KITTI datasets. For point cloud registration, we follow previous works and evaluate data pairs with large pose changes and partial overlap from ModelNet40. The results show that our method outperforms previous methods and achieves new state-of-the-art performance on both 3D scene flow estimation and point cloud registration, which demonstrates the superiority of the proposed zero-order method on irregular point cloud data.
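To make the two-stage decomposition described above more concrete, the following is a minimal, hypothetical sketch in PyTorch: stage one proposes a per-point flow by a closest-point lookup in the target cloud (no search window needed), and stage two feeds these regular point-wise proposals into a recurrent unit that globally regularizes the flow over several iterations. All function and module names, shapes, and hyper-parameters here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the interlaced two-stage idea; not the paper's code.
import torch
import torch.nn as nn


def pointwise_closest_point_flow(src, tgt):
    """Stage 1 (hypothetical): propose a per-point flow by snapping each source
    point to its closest target point (a zero-order, search-free update)."""
    dists = torch.cdist(src, tgt)        # (N, M) pairwise distances
    nn_idx = dists.argmin(dim=1)         # index of closest target per source point
    return tgt[nn_idx] - src             # per-point flow proposal, shape (N, 3)


class RecurrentRegularizer(nn.Module):
    """Stage 2 (hypothetical): a recurrent unit that globally regularizes the
    point-wise flow proposals across iterations."""

    def __init__(self, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(3 + 3, hidden_dim)  # encode point + flow proposal
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 3)         # predict a flow refinement

    def forward(self, src, tgt, iters=4):
        h = torch.zeros(src.shape[0], self.gru.hidden_size, device=src.device)
        flow = torch.zeros_like(src)
        for _ in range(iters):
            proposal = pointwise_closest_point_flow(src + flow, tgt)  # stage 1
            x = self.encoder(torch.cat([src, proposal], dim=-1))
            h = self.gru(x, h)                                        # stage 2
            flow = flow + self.head(h)                                # refined flow
        return flow


# Toy usage: estimate flow between two random point clouds.
src, tgt = torch.rand(1024, 3), torch.rand(1024, 3)
flow = RecurrentRegularizer()(src, tgt)
print(flow.shape)  # torch.Size([1024, 3])
```

Because the closest-point proposal is computed independently per point, the recurrent stage only ever sees fixed-size, regular inputs, which is the property the abstract highlights as avoiding the search-window problem on irregular point clouds.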