Paper Title
COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles
Authors
Abstract
Optical sensors and learning algorithms for autonomous vehicles have dramatically advanced in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by the limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments in telecommunication technologies, cooperative perception with vehicle-to-vehicle communication has become a promising paradigm to enhance autonomous driving in dangerous or emergency situations. We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving. Our model encodes LiDAR information into compact point-based representations that can be transmitted as messages between vehicles via realistic wireless channels. To evaluate our model, we develop AutoCastSim, a network-augmented driving simulation framework with example accident-prone scenarios. Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate over egocentric driving models in these challenging driving situations, with a 5 times smaller bandwidth requirement than the prior work V2VNet. COOPERNAUT and AutoCastSim are available at https://ut-austin-rpl.github.io/Coopernaut/.