Paper Title


Manual-Label Free 3D Detection via An Open-Source Simulator

Authors

Zhen Yang, Chi Zhang, Huiming Guo, Zhaoxiang Zhang

Abstract


LiDAR-based 3D object detectors typically need a large amount of detailed-labeled point cloud data for training, but these detailed labels are commonly expensive to acquire. In this paper, we propose a manual-label free 3D detection algorithm that leverages the CARLA simulator to generate a large number of self-labeled training samples, and introduce a novel Domain Adaptive VoxelNet (DA-VoxelNet) that can cross the distribution gap from the synthetic data to the real scenario. The self-labeled training samples are generated by a set of high-quality 3D models embedded in the CARLA simulator and a proposed LiDAR-guided sampling algorithm. Then a DA-VoxelNet that integrates both a sample-level DA module and an anchor-level DA module is proposed to enable the detector trained on the synthetic data to adapt to the real scenario. Experimental results show that on the KITTI evaluation set, the proposed unsupervised DA 3D detector achieves 76.66% and 56.64% mAP in BEV mode and 3D mode, respectively. The results reveal a promising perspective of training a LiDAR-based 3D detector without any hand-tagged labels.
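The abstract does not spell out how the sample-level and anchor-level DA modules are built. A common realization of such unsupervised DA modules (used here purely as an illustrative sketch, not as the paper's actual implementation) is a small domain classifier trained through a gradient reversal layer, so that the feature extractor learns domain-invariant features. The `GradientReversal` and `DomainClassifier` names and shapes below are hypothetical:

```python
import numpy as np


class GradientReversal:
    """Identity in the forward pass; flips and scales the gradient in the
    backward pass. Minimal sketch of the usual GRL trick for adversarial DA;
    the paper's actual DA modules may be implemented differently."""

    def __init__(self, lam=0.1):
        self.lam = lam  # trade-off between detection loss and domain loss

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # reversed, scaled gradient


class DomainClassifier:
    """Tiny logistic domain discriminator over pooled features.
    Predicts p(source domain) per row; a sample-level module would see one
    pooled vector per point cloud, an anchor-level one a vector per anchor."""

    def __init__(self, dim, rng):
        self.w = rng.normal(scale=0.01, size=dim)
        self.b = 0.0

    def predict(self, feats):
        # feats: (n, dim) feature vectors -> domain probability per row
        z = feats @ self.w + self.b
        return 1.0 / (1.0 + np.exp(-z))


rng = np.random.default_rng(0)
grl = GradientReversal(lam=0.1)
clf = DomainClassifier(dim=8, rng=rng)

feats = rng.normal(size=(4, 8))           # stand-in for pooled voxel features
probs = clf.predict(grl.forward(feats))   # forward: GRL is the identity
grad = grl.backward(np.ones_like(feats))  # backward: reversed, scaled gradient
```

Training the domain classifier to separate synthetic from real features while the reversed gradient pushes the backbone to make them inseparable is what lets a detector trained only on CARLA data transfer to KITTI-style scans.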
