Paper Title

ESPN: Extremely Sparse Pruned Networks

Paper Authors

Minsu Cho, Ameya Joshi, Chinmay Hegde

Paper Abstract

Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems. However, a line of recent works has shown that the size of deep networks can be considerably reduced by identifying, prior to training, a subset of neuron indicators (a mask) that correspond to significant weights. We demonstrate that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks. Our algorithm represents a hybrid between single-shot network pruning methods (such as SNIP) and Lottery Ticket-style approaches. We validate our approach on several datasets and outperform several existing pruning methods in both test accuracy and compression ratio.
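
As a rough illustration of the iterative mask discovery idea described in the abstract, below is a minimal sketch in PyTorch that alternates short training phases with magnitude-based mask updates, pruning a fixed fraction of the surviving weights each round. The function iterative_mask_discovery, the train_step closure, and the geometric pruning schedule are illustrative assumptions, not the authors' actual ESPN algorithm.

# A minimal sketch of iterative mask discovery (illustrative only; not the
# authors' ESPN implementation). Assumes a PyTorch model and a user-supplied
# train_step(model) that performs one optimizer step.
import torch

def iterative_mask_discovery(model, train_step, target_sparsity=0.99,
                             rounds=10, steps_per_round=100):
    # Prune only weight matrices/tensors; leave biases and norm layers dense.
    params = [p for p in model.parameters() if p.dim() > 1]
    masks = [torch.ones_like(p) for p in params]
    # Geometric schedule: keep this fraction of surviving weights each round,
    # so the overall density reaches (1 - target_sparsity) after all rounds.
    keep_per_round = (1.0 - target_sparsity) ** (1.0 / rounds)
    for _ in range(rounds):
        # Short training phase with pruned weights pinned at zero.
        for _ in range(steps_per_round):
            train_step(model)
            with torch.no_grad():
                for p, m in zip(params, masks):
                    p.mul_(m)
        # Mask update: rank surviving weights by magnitude, drop the smallest.
        with torch.no_grad():
            scores = torch.cat([(p * m).abs().flatten()
                                for p, m in zip(params, masks)])
            n_alive = sum(int(m.sum()) for m in masks)
            n_keep = max(1, int(keep_per_round * n_alive))
            threshold = torch.topk(scores, n_keep, largest=True).values.min()
            for p, m in zip(params, masks):
                m.mul_((p.abs() >= threshold).float())
    return masks

Under this (assumed) geometric schedule, ten rounds at target_sparsity=0.99 prune roughly 37% of the surviving weights per round, which is one way to reach the extreme sparsity levels the paper's title refers to while letting the remaining weights adapt between mask updates.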
