Paper Title

aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception

Paper Authors

Tamás Matuszka, Iván Barton, Ádám Butykai, Péter Hajas, Dávid Kiss, Domonkos Kovács, Sándor Kunsági-Máté, Péter Lengyel, Gábor Németh, Levente Pető, Dezső Ribli, Dávid Szeghy, Szabolcs Vajna, Bálint Varga

Paper Abstract

Autonomous driving is a popular research area within the computer vision research community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are accessible, they mainly comprise two sensor modalities (camera, LiDAR) which are not well suited for adverse weather. In addition, they lack far-range annotations, making it harder to train neural networks that are the base of a highway assistant function of an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain and is annotated with 3D bounding boxes with consistent identifiers across frames. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection. Data are available at \url{https://github.com/aimotive/aimotive_dataset}.
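
For orientation, the sketch below shows one way an annotated frame with 3D bounding boxes and per-object track identifiers might be represented and read. The field names and JSON layout here are illustrative assumptions only; the dataset's actual annotation schema and loaders are documented in the linked repository.

```python
import json
from dataclasses import dataclass
from typing import List

@dataclass
class Box3D:
    """One annotated object: category, pose, size, and a track id that
    stays constant for the same object across frames (assumed layout)."""
    track_id: int
    category: str
    center_xyz: List[float]  # meters, ego-vehicle frame (assumed)
    size_lwh: List[float]    # length, width, height in meters
    yaw: float               # heading around the up axis, radians

def load_frame_boxes(path: str) -> List[Box3D]:
    # Hypothetical per-frame JSON layout: {"objects": [{...}, ...]}
    with open(path) as f:
        frame = json.load(f)
    return [
        Box3D(
            track_id=obj["track_id"],
            category=obj["category"],
            center_xyz=obj["center"],
            size_lwh=obj["size"],
            yaw=obj["yaw"],
        )
        for obj in frame["objects"]
    ]

if __name__ == "__main__":
    # Example usage against a hypothetical annotation file name.
    boxes = load_frame_boxes("frame_000123.json")
    for box in boxes:
        print(box.track_id, box.category, box.center_xyz)
```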
