Paper Title

Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection

Paper Authors

Tianyu Wang, Xiaowei Hu, Zhengzhe Liu, Chi-Wing Fu

Abstract

LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the arts.
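The abstract describes a teacher-student setup: a dense-point detector (DDet) provides dense 3D features that a sparse-point detector (SDet), equipped with the lightweight S2D module, learns to imitate, so that dense inputs are not needed at inference time. Below is a minimal, hypothetical PyTorch sketch of that feature-imitation step; the module structure, feature shapes, and the MSE imitation loss are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: train a sparse-branch feature map (SDet) to match a frozen
# dense-branch feature map (DDet) via a lightweight S2D-style refinement module.
import torch
import torch.nn as nn

class S2DModule(nn.Module):
    """Lightweight plug-in block that refines sparse BEV features toward dense ones."""
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Residual refinement of the sparse feature map.
        return feat + self.refine(feat)

def densification_loss(sparse_feat: torch.Tensor, dense_feat: torch.Tensor) -> torch.Tensor:
    """Feature-imitation loss pushing SDet features toward the frozen DDet features."""
    return nn.functional.mse_loss(sparse_feat, dense_feat)

if __name__ == "__main__":
    C, H, W = 64, 128, 128  # assumed BEV feature-map size
    s2d = S2DModule(C)

    # Stand-ins for the two branches' backbone outputs.
    sdet_feat = torch.randn(2, C, H, W, requires_grad=True)  # sparse branch (trainable)
    ddet_feat = torch.randn(2, C, H, W)                       # dense branch (frozen teacher)

    loss = densification_loss(s2d(sdet_feat), ddet_feat.detach())
    loss.backward()  # gradients reach only the sparse branch and the S2D module
    print(f"densification loss: {loss.item():.4f}")
```

In the full method, this imitation term would be combined with the usual detection losses and the point cloud reconstruction objective mentioned in the abstract, so that the sparse detector both mimics dense features and still learns to detect objects.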
