Paper Title


EAPruning: Evolutionary Pruning for Vision Transformers and CNNs

Authors

Qingyuan Li, Bo Zhang, Xiangxiang Chu

Abstract

Structured pruning greatly eases the deployment of large neural networks in resource-constrained environments. However, current methods either involve strong domain expertise, require extra hyperparameter tuning, or are restricted to a specific type of network, which prevents pervasive industrial applications. In this paper, we undertake a simple and effective approach that can be easily applied to both vision transformers and convolutional neural networks. Specifically, we consider pruning as an evolution process of sub-network structures that inherit weights through reconstruction techniques. We achieve a 50% FLOPs reduction for ResNet50 and MobileNetV1, leading to 1.37x and 1.34x speedups respectively. For DeiT-Base, we reach nearly 40% FLOPs reduction and 1.4x speedup. Our code will be made available.
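The abstract describes pruning as an evolutionary search over sub-network structures under a FLOPs budget. The toy sketch below illustrates that idea only in outline: the layer FLOPs, the fitness proxy, and all function names are illustrative assumptions, not the paper's actual pipeline (which inherits weights via reconstruction and evaluates real sub-networks).

```python
import random

# Hypothetical per-layer FLOPs of a small network (illustrative numbers,
# not taken from the paper).
LAYER_FLOPS = [100, 200, 400, 200]
TARGET_REDUCTION = 0.5  # aim for ~50% FLOPs reduction, as in the abstract

def flops(keep_ratios):
    """Total FLOPs of a sub-network keeping `keep_ratios[i]` of layer i's channels."""
    return sum(r * f for r, f in zip(keep_ratios, LAYER_FLOPS))

def fitness(keep_ratios):
    # Stand-in for validation accuracy after weight inheritance: here we simply
    # reward keeping more channels among budget-feasible candidates. A real
    # pipeline would evaluate the reconstructed sub-network instead.
    budget = (1 - TARGET_REDUCTION) * sum(LAYER_FLOPS)
    if flops(keep_ratios) > budget:
        return -1.0  # infeasible: over the FLOPs budget
    return sum(keep_ratios) / len(keep_ratios)

def mutate(keep_ratios, step=0.1):
    # Perturb one layer's keep ratio, clamped to [0.1, 1.0].
    child = list(keep_ratios)
    i = random.randrange(len(child))
    child[i] = min(1.0, max(0.1, child[i] + random.choice([-step, step])))
    return child

def evolve(pop_size=20, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.uniform(0.1, 1.0) for _ in LAYER_FLOPS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

The search returns a per-layer keep-ratio vector whose total FLOPs respect the budget; in the paper's setting each candidate would additionally inherit and reconstruct weights before being scored.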
