Paper Title
HyNNA: Improved Performance for Neuromorphic Vision Sensor based Surveillance using Hybrid Neural Network Architecture
Paper Authors
Paper Abstract
Applications in the Internet of Video Things (IoVT) domain have very tight constraints with respect to power and area. While neuromorphic vision sensors (NVS) may offer advantages over traditional imagers in this domain, existing NVS systems either do not meet the power constraints or have not demonstrated end-to-end system performance. To address this, we improve on a recently proposed hybrid event-frame approach by using morphological image processing algorithms for region proposal, and we address the low-power requirement for object detection and classification by exploring various convolutional neural network (CNN) architectures. Specifically, we compare the results of our object detection framework against a state-of-the-art low-power NVS surveillance system and show an improvement in accuracy from 63.1% to 82.16%. Moreover, we show that using multiple bits does not improve accuracy, so system designers can save power and area by using only single-bit event polarity information. In addition, we explore the CNN architecture space for object classification and present useful insights for trading off accuracy against lower power through reduced memory and arithmetic operations.
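The abstract outlines a pipeline in which NVS events are aggregated into single-bit frames, morphological image processing generates region proposals, and a compact CNN classifies each proposal. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the event tuple layout, sensor resolution, structuring-element size, and area threshold are all assumptions, and OpenCV's morphological closing plus connected-component analysis stand in for whatever morphological algorithm the paper actually uses.

```python
# Illustrative sketch only: single-bit event-frame accumulation followed by
# morphological region proposal. Event format (x, y, polarity, timestamp) and
# the 304x240 resolution are assumptions made for this example.
import numpy as np
import cv2

WIDTH, HEIGHT = 304, 240  # assumed sensor resolution

def events_to_binary_frame(events, width=WIDTH, height=HEIGHT):
    """Mark every pixel that fired at least one event (single-bit, polarity ignored)."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for x, y, polarity, t in events:
        frame[y, x] = 255
    return frame

def region_proposals(frame, kernel_size=5, min_area=50):
    """Morphological closing to merge nearby event pixels, then bounding boxes
    of the resulting connected components as region proposals."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, kernel)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    boxes = []
    for i in range(1, num):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes

# Usage: proposals = region_proposals(events_to_binary_frame(event_list))
# Each box would then be cropped, resized, and passed to a compact CNN classifier.
```

In this sketch, each proposed box is intended to be classified by a small CNN; the paper's accuracy-versus-memory/arithmetic trade-off would correspond to varying that classifier's depth and channel widths.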