Paper Title
LiDAR-as-Camera for End-to-End Driving
Paper Authors
Paper Abstract
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or more cameras as the most commonly used input and a low-level driving command, e.g. the steering angle, as output. However, depth-sensing has been shown in simulation to make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging, due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR-images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. We demonstrate that such LiDAR-images are sufficient for the real-car road-following task and perform at least as well as camera-based models in the tested conditions, with the difference increasing when generalizing to new weather conditions. In the second direction of study, we reveal that the temporal smoothness of off-policy prediction sequences correlates with actual on-policy driving ability as well as the commonly used mean absolute error does.
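As a rough illustration of the input representation described in the abstract, the sketch below stacks the three Ouster channels into a single image tensor suitable for a standard RGB CNN. The scan resolution, normalization scheme, and the depth, intensity, and ambient arrays are assumptions for illustration only, not the authors' actual preprocessing pipeline.

    import numpy as np

    # Hypothetical raw channel readouts from one 360-degree LiDAR sweep.
    # An Ouster OS1-128-style scan of 128 rows x 1024 columns is assumed.
    H, W = 128, 1024
    depth = np.random.randint(0, 100_000, size=(H, W)).astype(np.float32)    # range readings (assumed units)
    intensity = np.random.randint(0, 2**16, size=(H, W)).astype(np.float32)  # signal return strength (assumed)
    ambient = np.random.randint(0, 2**16, size=(H, W)).astype(np.float32)    # ambient near-infrared (assumed)

    def to_lidar_image(depth, intensity, ambient):
        """Stack the three per-pixel channels into an HxWx3 image.

        Because all channels come from the same sensor sweep, no extrinsic
        calibration or time synchronization is needed: the channels are
        aligned pixel-for-pixel by construction.
        """
        def norm(x):
            # Scale each channel to [0, 1]; a real pipeline would likely use
            # fixed sensor-specific ranges instead of per-frame min/max.
            return (x - x.min()) / (x.max() - x.min() + 1e-8)
        return np.stack([norm(depth), norm(intensity), norm(ambient)], axis=-1)

    lidar_image = to_lidar_image(depth, intensity, ambient)
    print(lidar_image.shape)  # (128, 1024, 3), the same layout as an RGB camera frame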
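The abstract does not define the temporal smoothness measure exactly. A plausible minimal version, assumed here purely for illustration, is the mean absolute difference between consecutive off-policy predictions, computed alongside the usual mean absolute error; the toy example shows why such a metric can separate models that MAE alone cannot.

    import numpy as np

    def mean_absolute_error(pred, target):
        """Standard off-policy metric: average deviation from recorded commands."""
        return np.mean(np.abs(pred - target))

    def temporal_smoothness(pred):
        """Assumed smoothness proxy: mean absolute change between consecutive
        predictions. Lower values mean a steadier steering signal; the paper's
        exact definition may differ."""
        return np.mean(np.abs(np.diff(pred)))

    # Toy example: two models with identical MAE but different jitter.
    target = np.zeros(6)
    smooth_pred = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
    jittery_pred = np.array([0.1, -0.1, 0.1, -0.1, 0.1, -0.1])

    for name, pred in [("smooth", smooth_pred), ("jittery", jittery_pred)]:
        print(name,
              "MAE:", mean_absolute_error(pred, target),
              "smoothness:", temporal_smoothness(pred))
    # Both models score MAE 0.1 against the recorded commands, but the
    # smoothness values (0.0 vs 0.2) expose the oscillating steering signal.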