Paper Title
End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking
Paper Authors
Paper Abstract
Event cameras, which are asynchronous bio-inspired vision sensors, have shown great potential in computer vision and artificial intelligence. However, the application of event cameras to object-level motion estimation or tracking is still in its infancy. The main idea behind this work is to propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking. To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay (TSLTD) representation, which effectively encodes the spatio-temporal information of asynchronous retinal events into TSLTD frames with clear motion patterns. We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform an end-to-end 5-DoF object motion regression. Our method is compared with state-of-the-art object tracking methods based on conventional cameras or event cameras. The experimental results show the superiority of our method in handling various challenging environments, such as fast motion and low illumination conditions.
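To make the TSLTD idea concrete, a minimal sketch of how a time-surface frame with linear time decay might be built is shown below. The abstract does not specify the exact formula, so this assumes the common time-surface scheme: each pixel stores a linearly decayed weight of its most recent event within a fixed time window, with one channel per event polarity. The function name `tsltd_frame` and the per-polarity channel layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tsltd_frame(events, t_start, t_end, height, width):
    """Build one Time-Surface frame with Linear Time Decay (sketch).

    events: iterable of (t, x, y, p) tuples, where t is the timestamp,
            (x, y) the pixel location, and p the polarity in {0, 1}.
    Returns a (height, width, 2) float array: channel p holds the
    linearly decayed weight of the most recent event of that polarity.
    """
    frame = np.zeros((height, width, 2), dtype=np.float32)
    window = t_end - t_start
    for t, x, y, p in events:
        if t_start <= t <= t_end:
            # Linear decay: the newest events map to values near 1,
            # the oldest events in the window map to values near 0.
            # Later events at the same pixel overwrite earlier ones.
            frame[y, x, p] = (t - t_start) / window
    return frame

# Toy usage: three events inside a window of length 1.0.
events = [(0.0, 1, 1, 1), (0.5, 2, 2, 0), (1.0, 1, 1, 1)]
frame = tsltd_frame(events, t_start=0.0, t_end=1.0, height=4, width=4)
```

Frames built this way are ordinary dense tensors, so a sequence of them can be fed directly to a convolutional regression network such as the RMRNet described above.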