Paper Title
Radar+RGB Attentive Fusion for Robust Object Detection in Autonomous Vehicles
Paper Authors
Paper Abstract
This paper presents two architecture variants, referred to as RANet and BIRANet. The proposed architectures aim to use radar signal data along with RGB camera images to form a robust detection network that works efficiently even in variable lighting and weather conditions such as rain, dust, fog, and others. First, radar information is fused into the feature extractor network. Second, radar points are used to generate guided anchors. Third, a method is proposed to improve region proposal network targets. BIRANet yields 72.3/75.3% average AP/AR on the NuScenes dataset, outperforming the base network, Faster R-CNN with Feature Pyramid Network (FFPN). RANet gives 69.6/71.9% average AP/AR on the same dataset, which is reasonably acceptable performance. Both BIRANet and RANet are also evaluated to be robust to noise.
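To make the first fusion step concrete, the sketch below shows one common way radar can be injected into an image feature extractor: project radar points onto the image plane as a sparse map, then concatenate it with the RGB feature map and mix channels with a 1x1 projection. This is a minimal NumPy illustration under assumed details; the helper names (`rasterize_radar`, `fuse_features`), the projection scheme, and the random 1x1 weights are hypothetical, not the paper's exact method.

```python
import numpy as np

def rasterize_radar(points, H, W):
    """Scatter radar points given as (x_px, y_px, depth) onto a sparse
    H x W map. Hypothetical helper: the actual projection and radar
    channels used by RANet/BIRANet may differ."""
    radar_map = np.zeros((H, W), dtype=np.float32)
    for x, y, depth in points:
        xi, yi = int(x), int(y)
        if 0 <= yi < H and 0 <= xi < W:
            radar_map[yi, xi] = depth
    return radar_map

def fuse_features(rgb_feat, radar_map):
    """Fuse a (C, H, W) RGB feature map with a (H, W) radar map by
    channel-wise concatenation followed by a 1x1 projection back to C
    channels (weights are random here, standing in for learned ones)."""
    C, H, W = rgb_feat.shape
    stacked = np.concatenate([rgb_feat, radar_map[None]], axis=0)  # (C+1, H, W)
    rng = np.random.default_rng(0)
    w = rng.standard_normal((C, C + 1)).astype(np.float32) * 0.01  # 1x1 conv
    return np.einsum('oc,chw->ohw', w, stacked)                    # (C, H, W)

# Toy example: one radar return fused into an 8-channel feature map.
rgb = np.ones((8, 4, 4), dtype=np.float32)
radar = rasterize_radar([(1.0, 2.0, 12.5)], 4, 4)
fused = fuse_features(rgb, radar)
print(fused.shape)  # (8, 4, 4)
```

The key property to note is that fusion happens at the feature level, so the detector heads downstream (anchors, RPN) see a representation already conditioned on radar evidence.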