Paper Title
Renormalized Sparse Neural Network Pruning
Paper Authors
Paper Abstract
Large neural networks are heavily over-parameterized. This is done because over-parameterization improves training to optimality. However, once the network is trained, many of these parameters can be zeroed, or pruned, leaving an equivalent sparse neural network. We propose renormalizing sparse neural networks in order to improve accuracy. We prove that our method's error converges to zero as the network parameters cluster or concentrate. We prove that without renormalization, the error does not converge to zero in general. We experiment with our method on the real-world datasets MNIST, Fashion MNIST, and CIFAR-10 and confirm a large improvement in accuracy with renormalization versus standard pruning.
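To make the idea concrete, below is a minimal sketch of one plausible reading of "prune, then renormalize": magnitude-based pruning of a weight matrix followed by a per-neuron rescaling that restores each row's original L2 norm. The function `prune_and_renormalize`, the choice of magnitude pruning, and the L2-norm-preserving rescaling are illustrative assumptions, not necessarily the paper's exact scheme.

```python
# Illustrative sketch (assumption): magnitude pruning + per-row L2 renormalization.
import numpy as np

def prune_and_renormalize(W: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude entries of W, then rescale each surviving
    row so its L2 norm matches that of the original dense row."""
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return W.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(W) > threshold           # keep only large-magnitude weights
    W_sparse = W * mask

    dense_norms = np.linalg.norm(W, axis=1, keepdims=True)
    sparse_norms = np.linalg.norm(W_sparse, axis=1, keepdims=True)
    scale = np.divide(dense_norms, sparse_norms,
                      out=np.ones_like(dense_norms), where=sparse_norms > 0)
    return W_sparse * scale                # renormalized sparse weights

# Example: prune 90% of a random layer's weights and renormalize the rest.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
W_pruned = prune_and_renormalize(W, sparsity=0.9)
print("fraction of weights kept:", np.mean(W_pruned != 0))
```

In this sketch the rescaling compensates for the magnitude removed by pruning; standard pruning would correspond to returning `W_sparse` without the final scaling step.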