Paper Title

Why Random Pruning Is All We Need to Start Sparse

Authors

Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz

Abstract

Random masks define surprisingly effective sparse neural network models, as has been shown empirically. The resulting sparse networks can often compete with dense architectures and state-of-the-art lottery ticket pruning algorithms, even though they do not rely on computationally expensive prune-train iterations and can be drawn initially without significant computational overhead. We offer a theoretical explanation of how random masks can approximate arbitrary target networks if they are wider by a logarithmic factor in the inverse sparsity $1 / \log(1/\text{sparsity})$. This overparameterization factor is necessary at least for 3-layer random networks, which elucidates the observed degrading performance of random networks at higher sparsity. At moderate to high sparsity levels, however, our results imply that sparser networks are contained within random source networks so that any dense-to-sparse training scheme can be turned into a computationally more efficient sparse-to-sparse one by constraining the search to a fixed random mask. We demonstrate the feasibility of this approach in experiments for different pruning methods and propose particularly effective choices of initial layer-wise sparsity ratios of the random source network. As a special case, we show theoretically and experimentally that random source networks also contain strong lottery tickets.

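The sparse-to-sparse idea described in the abstract, drawing a random mask once at chosen layer-wise densities and confining all subsequent training to that fixed mask, can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumptions of my own, not the authors' implementation: the network shape, the per-layer density values, and the training loop are hypothetical, and the densities shown are not necessarily the layer-wise sparsity ratios proposed in the paper.

```python
# Minimal sketch (assumed setup, not the authors' code): train a small MLP while
# constraining the search to a fixed random mask, i.e. sparse-to-sparse training.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(784, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)

# Hypothetical layer-wise densities (fraction of weights kept per layer).
densities = [0.2, 0.1, 0.3]

# Draw a fixed random mask once, before any training.
linear_layers = [m for m in model if isinstance(m, nn.Linear)]
masks = []
for layer, density in zip(linear_layers, densities):
    mask = (torch.rand_like(layer.weight) < density).float()
    layer.weight.data.mul_(mask)  # start sparse: weights outside the mask are zero
    masks.append(mask)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # Zero gradients outside the mask; with plain SGD this keeps the pruned
    # weights exactly at zero, so training never leaves the random mask.
    for layer, mask in zip(linear_layers, masks):
        layer.weight.grad.mul_(mask)
    optimizer.step()
    return loss.item()

# One step on random data, just to show the call pattern.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(train_step(x, y))
```

Any dense-to-sparse pruning criterion could in principle be applied inside such a fixed random mask rather than over the full dense weight set, which is the sense in which the abstract describes turning dense-to-sparse training into a more efficient sparse-to-sparse scheme.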