Paper Title


Structure-Aware Network for Lane Marker Extraction with Dynamic Vision Sensor

Authors

Wensheng Cheng, Hao Luo, Wen Yang, Lei Yu, Wei Li

Abstract


Lane marker extraction is a basic yet necessary task for autonomous driving. Although past years have witnessed major advances in lane marker extraction with deep learning models, these models all target ordinary RGB images produced by frame-based cameras, which limits their performance in extreme cases such as large illumination changes. To tackle this problem, we introduce the Dynamic Vision Sensor (DVS), a type of event-based sensor, to the lane marker extraction task and build a high-resolution DVS dataset for lane marker extraction. We collect the raw event data and generate 5,424 DVS images at a resolution of 1280$\times$800 pixels, the highest among all currently available DVS datasets. All images are annotated in a multi-class semantic segmentation format. We then propose a structure-aware network for lane marker extraction in DVS images, which captures directional information comprehensively via multidirectional slice convolution. We evaluate our proposed network against other state-of-the-art lane marker extraction models on this dataset. Experimental results demonstrate that our method outperforms the other competitors. The dataset is made publicly available, including the raw event data, accumulated images, and labels.
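The abstract does not spell out how multidirectional slice convolution works. Below is a minimal NumPy sketch of the general idea behind slice-based (SCNN-style) convolution: the feature map is processed slice by slice so that information propagates across the whole map in one direction, and passes in several directions are combined. The kernel, ReLU update rule, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def slice_conv(feat, kernel, direction="down"):
    """Directional slice convolution (sketch): update each row using a
    1-D convolution of the previous row, so information flows across
    the feature map in one pass. `kernel` is a hypothetical 1-D filter."""
    f = feat.astype(float).copy()
    if direction == "up":
        f = f[::-1]
    h, w = f.shape
    k = len(kernel)
    pad = k // 2
    for i in range(1, h):
        prev = np.pad(f[i - 1], pad, mode="constant")
        # Message from the previous slice: 1-D convolution + ReLU.
        msg = np.array([np.dot(prev[j:j + k], kernel) for j in range(w)])
        f[i] = f[i] + np.maximum(msg, 0.0)
    if direction == "up":
        f = f[::-1]
    return f

def multidirectional_slice_conv(feat, kernel):
    """Combine slice passes in four directions (down, up, right, left),
    loosely mirroring the 'multidirectional' idea in the abstract."""
    out = slice_conv(feat, kernel, "down")
    out = slice_conv(out, kernel, "up")
    out = slice_conv(out.T, kernel, "down").T  # left-to-right pass
    out = slice_conv(out.T, kernel, "up").T    # right-to-left pass
    return out
```

Because each pass conditions a slice on its already-updated neighbor, a single activation can influence pixels arbitrarily far away along the pass direction, which is useful for long, thin structures such as lane markers.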
