Paper Title

Fusing Event-based Camera and Radar for SLAM Using Spiking Neural Networks with Continual STDP Learning

Authors

Safa, Ali, Verbelen, Tim, Ocket, Ilja, Bourdoux, André, Sahli, Hichem, Catthoor, Francky, Gielen, Georges

Abstract

This work proposes a first-of-its-kind SLAM architecture fusing an event-based camera and a Frequency Modulated Continuous Wave (FMCW) radar for drone navigation. Each sensor is processed by a bio-inspired Spiking Neural Network (SNN) with continual Spike-Timing-Dependent Plasticity (STDP) learning, as observed in the brain. In contrast to most learning-based SLAM systems, which a) require the acquisition of a representative dataset of the environment in which navigation must be performed and b) require an off-line training phase, our method does not require any offline training phase; rather, the SNN continuously learns features from the input data on the fly via STDP. At the same time, the SNN outputs are used as feature descriptors for loop closure detection and map correction. We conduct numerous experiments to benchmark our system against state-of-the-art RGB methods and we demonstrate the robustness of our DVS-Radar SLAM approach under strong lighting variations.
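
To make the abstract's description of continual STDP learning and descriptor-based loop closure concrete, the sketch below shows a generic pair-based STDP rule with exponential traces and a cosine-similarity descriptor match. This is an illustrative assumption only: the function names, parameters, and exact update rule are hypothetical and are not taken from the paper.

```python
import numpy as np

def stdp_update(w, pre_spikes, post_spikes,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                dt=1.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP on a weight matrix w of shape (n_post, n_pre).

    pre_spikes:  (T, n_pre)  binary input spike trains
    post_spikes: (T, n_post) binary output spike trains
    """
    pre_trace = np.zeros(pre_spikes.shape[1])    # decaying trace of pre-synaptic spikes
    post_trace = np.zeros(post_spikes.shape[1])  # decaying trace of post-synaptic spikes
    for t in range(pre_spikes.shape[0]):
        pre = pre_spikes[t].astype(float)
        post = post_spikes[t].astype(float)
        pre_trace = pre_trace * np.exp(-dt / tau_plus) + pre
        post_trace = post_trace * np.exp(-dt / tau_minus) + post
        # LTP: a post spike shortly after pre activity strengthens the synapse.
        w = w + a_plus * np.outer(post, pre_trace)
        # LTD: a pre spike shortly after post activity weakens the synapse.
        w = w - a_minus * np.outer(post_trace, pre)
        w = np.clip(w, w_min, w_max)
    return w

def loop_closure_score(desc_a, desc_b, eps=1e-9):
    """Cosine similarity between two SNN output descriptors (e.g. spike counts)."""
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# Toy usage: random spike trains over 100 time steps, 64 inputs, 16 outputs.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(16, 64))
pre = (rng.random((100, 64)) < 0.05).astype(float)
post = (rng.random((100, 16)) < 0.05).astype(float)
w = stdp_update(w, pre, post)
print(loop_closure_score(post.sum(axis=0), post.sum(axis=0)))  # ~1.0 for identical places
```

In a setup like this, the per-place spike-count vectors produced by the SNN would play the role of the feature descriptors mentioned in the abstract, and a high similarity score between the current descriptor and a previously stored one would flag a candidate loop closure for map correction.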
