Paper Title


Pruning Very Deep Neural Network Channels for Efficient Inference

Paper Author

He, Yihui

Paper Abstract


In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm that effectively prunes each layer by LASSO-regression-based channel selection followed by least-squares reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances compatibility with various architectures. Our pruned VGG-16 achieves state-of-the-art results: a 5x speed-up with only a 0.3% increase in error. More importantly, our method is able to accelerate modern networks like ResNet and Xception, suffering only 1.4% and 1.0% accuracy loss, respectively, under a 2x speed-up, which is significant. Our code has been made publicly available.
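To make the two-step procedure in the abstract concrete, below is a minimal sketch of the idea on a toy single layer: a LASSO fit over per-channel responses selects which input channels to keep, and an ordinary least-squares fit then reconstructs the layer output from the surviving channels. The layer sizes, variable names, and use of NumPy/scikit-learn are illustrative assumptions, not the paper's reference implementation.

```python
# A minimal sketch of LASSO channel selection + least-squares reconstruction
# on a toy single layer. All shapes and names are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy layer: c_in input channels; channel i contributes X[:, i, :] @ W[i].
n_samples, c_in, k, d_out = 512, 16, 9, 32     # assumed sizes
X = rng.normal(size=(n_samples, c_in, k))      # per-channel input features
W = rng.normal(size=(c_in, k, d_out))          # per-channel weights
W[c_in // 2:] *= 0.05                          # make half the channels weak (prunable)
Y = sum(X[:, i, :] @ W[i] for i in range(c_in))  # original layer output

# Step 1: channel selection. Fit per-channel scalars beta_i with an L1 penalty;
# channels whose beta is driven to zero are pruned.
Z = np.stack([(X[:, i, :] @ W[i]).ravel() for i in range(c_in)], axis=1)
lasso = Lasso(alpha=0.05, fit_intercept=False)
lasso.fit(Z, Y.ravel())
keep = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(f"kept {len(keep)}/{c_in} channels")

# Step 2: least-squares reconstruction. Refit the weights of the kept channels
# to minimize || Y - sum_{i in keep} X_i W_i' ||^2.
X_keep = X[:, keep, :].reshape(n_samples, -1)        # (N, |keep| * k)
W_new, *_ = np.linalg.lstsq(X_keep, Y, rcond=None)   # (|keep| * k, d_out)
Y_hat = X_keep @ W_new
print("relative reconstruction error:", np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))
```

In the full method this pair of steps is applied layer by layer (and generalized to multi-branch blocks such as those in ResNet), with the LASSO penalty controlling how many channels survive at each layer.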
