Paper Title
CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection
Paper Authors
Paper Abstract
The perception system in autonomous vehicles is responsible for detecting and tracking the surrounding objects. This is usually done by taking advantage of several sensing modalities to increase robustness and accuracy, which makes sensor fusion a crucial part of the perception system. In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection. Our approach, called CenterFusion, first uses a center point detection network to detect objects by identifying their center points on the image. It then solves the key data association problem using a novel frustum-based method to associate the radar detections to their corresponding object's center point. The associated radar detections are used to generate radar-based feature maps to complement the image features, and regress to object properties such as depth, rotation and velocity. We evaluate CenterFusion on the challenging nuScenes dataset, where it improves the overall nuScenes Detection Score (NDS) of the state-of-the-art camera-based algorithm by more than 12%. We further show that CenterFusion significantly improves the velocity estimation accuracy without using any additional temporal information. The code is available at https://github.com/mrnabati/CenterFusion.
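The frustum-based association described above can be sketched in simplified form: given an object's 2D bounding box and a preliminary depth estimate, keep only the radar detections that project inside the box within a depth interval around the estimate, then associate the nearest one with the object. This is a minimal illustrative sketch, not the paper's actual implementation; the function names, the pinhole-camera projection, and the depth tolerance `delta` are assumptions introduced here for clarity.

```python
# Hypothetical sketch of frustum-based radar-camera association
# (illustrative only; names and the depth tolerance are assumptions,
# not CenterFusion's actual code).

def frustum_associate(box2d, depth_est, radar_points, intrinsics, delta=2.0):
    """Associate a radar detection with an object.

    box2d        -- (u1, v1, u2, v2) 2D bounding box in pixels
    depth_est    -- preliminary object depth estimate (meters)
    radar_points -- iterable of (x, y, z) points in camera coordinates
    intrinsics   -- (fx, fy, cx, cy) pinhole camera parameters
    delta        -- assumed half-width of the depth interval (meters)

    Returns the closest radar point inside the frustum, or None.
    """
    u1, v1, u2, v2 = box2d
    fx, fy, cx, cy = intrinsics
    inside = []
    for (x, y, z) in radar_points:
        # keep only detections near the estimated depth
        if not (depth_est - delta <= z <= depth_est + delta):
            continue
        # project the radar point onto the image plane
        u = fx * x / z + cx
        v = fy * y / z + cy
        # keep detections whose projection falls inside the 2D box
        if u1 <= u <= u2 and v1 <= v <= v2:
            inside.append((x, y, z))
    if not inside:
        return None
    # associate the nearest (smallest depth) detection with the object
    return min(inside, key=lambda p: p[2])
```

In the paper, this association step is what links each detected center point to a radar return, whose depth and radial velocity are then encoded into radar-based feature maps for the regression heads.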