Paper Title
Designing and Training of Lightweight Neural Networks on Edge Devices using Early Halting in Knowledge Distillation
Paper Authors
Paper Abstract
The automated feature extraction capability and significant performance of Deep Neural Networks (DNNs) make them suitable for Internet of Things (IoT) applications. However, deploying DNNs on edge devices is prohibitive due to their colossal computation, energy, and storage requirements. This paper presents a novel approach for designing and training a lightweight DNN using a large-size DNN. The approach considers the available storage, processing speed, and maximum allowable processing time for executing the task on the edge device. We present a knowledge distillation based training procedure that trains the lightweight DNN to adequate accuracy. During the training of the lightweight DNN, we introduce a novel early halting technique that preserves network resources and thereby speeds up the training procedure. Finally, we present empirical and real-world evaluations, using various edge devices, to verify the effectiveness of the proposed approach under different constraints.
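Note: The abstract itself gives no implementation details. Purely as an illustrative sketch, the Python/PyTorch code below shows a standard knowledge-distillation training loop (Hinton-style soft targets) combined with a hypothetical loss-plateau halting rule. The paper's actual early halting criterion, and all names in the sketch (distillation_loss, train_student, T, alpha, halt_threshold), are assumptions, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Soft-target loss: KL divergence between temperature-scaled
        # teacher and student distributions (standard Hinton-style KD).
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target loss: cross-entropy against ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    def train_student(student, teacher, loader, optimizer,
                      epochs=50, halt_threshold=1e-3):
        teacher.eval()  # teacher weights stay frozen during distillation
        prev_epoch_loss = float("inf")
        for epoch in range(epochs):
            epoch_loss = 0.0
            for inputs, labels in loader:
                with torch.no_grad():
                    teacher_logits = teacher(inputs)
                student_logits = student(inputs)
                loss = distillation_loss(student_logits, teacher_logits, labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                epoch_loss += loss.item()
            epoch_loss /= len(loader)
            # Hypothetical halting rule (an assumption, not the paper's method):
            # stop once the per-epoch loss improvement falls below the threshold.
            if prev_epoch_loss - epoch_loss < halt_threshold:
                break
            prev_epoch_loss = epoch_loss
        return student

In this sketch, training halts once the per-epoch loss improvement drops below halt_threshold, which mirrors the abstract's stated goal of preserving resources by stopping training early when further epochs yield diminishing returns.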