Paper Title


RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks

Paper Authors

Alberto Marchisio, Vojtech Mrazek, Andrea Massa, Beatrice Bussolino, Maurizio Martina, Muhammad Shafique

Abstract


Neural Architecture Search (NAS) algorithms aim at finding efficient Deep Neural Network (DNN) architectures for a given application under given system constraints. DNNs are computationally complex as well as vulnerable to adversarial attacks. In order to address multiple design objectives, we propose RoHNAS, a novel NAS framework that jointly optimizes for adversarial robustness and hardware efficiency of DNNs executed on specialized hardware accelerators. Besides the traditional convolutional DNNs, RoHNAS additionally accounts for complex types of DNNs such as Capsule Networks. For reducing the exploration time, RoHNAS analyzes and selects appropriate values of adversarial perturbation for each dataset to employ in the NAS flow. Extensive evaluations on multi-Graphics Processing Unit (GPU) High-Performance Computing (HPC) nodes provide a set of Pareto-optimal solutions, leveraging the tradeoff between the above-discussed design objectives. For example, a Pareto-optimal DNN for the CIFAR-10 dataset exhibits 86.07% accuracy, while having an energy of 38.63 mJ, a memory footprint of 11.85 MiB, and a latency of 4.47 ms.
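The core selection step described in the abstract, keeping only architectures that are Pareto-optimal across accuracy, energy, memory, and latency, can be sketched as follows. This is a minimal illustrative sketch, not code from the RoHNAS framework; all candidate names and the non-CIFAR-10 metric values are hypothetical (only `dnn-A`'s numbers come from the abstract's example).

```python
# Sketch of multi-objective Pareto-front filtering over candidate DNNs.
# Objectives: accuracy is maximized; energy (mJ), memory (MiB), and
# latency (ms) are minimized. Candidate names/values are illustrative.

def dominates(a, b):
    """True if candidate a is no worse than b on every objective and
    strictly better on at least one."""
    no_worse = (a["acc"] >= b["acc"] and a["energy"] <= b["energy"]
                and a["mem"] <= b["mem"] and a["lat"] <= b["lat"])
    strictly_better = (a["acc"] > b["acc"] or a["energy"] < b["energy"]
                       or a["mem"] < b["mem"] or a["lat"] < b["lat"])
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

candidates = [
    # Metrics from the abstract's CIFAR-10 example:
    {"name": "dnn-A", "acc": 86.07, "energy": 38.63, "mem": 11.85, "lat": 4.47},
    # Hypothetical candidates:
    {"name": "dnn-B", "acc": 84.10, "energy": 45.00, "mem": 14.00, "lat": 5.10},
    {"name": "dnn-C", "acc": 88.20, "energy": 60.00, "mem": 20.00, "lat": 7.90},
]
front = pareto_front(candidates)
print([c["name"] for c in front])  # → ['dnn-A', 'dnn-C']
```

Here `dnn-B` is dominated by `dnn-A` (worse on every objective) and is discarded, while `dnn-A` and `dnn-C` trade accuracy against hardware cost and both survive, which is exactly the kind of tradeoff set the abstract reports.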
