Paper Title
Structural Pruning via Latency-Saliency Knapsack
Paper Authors
Paper Abstract
Structural pruning can simplify network architecture and improve inference speed. We propose Hardware-Aware Latency Pruning (HALP), which formulates structural pruning as a global resource allocation optimization problem, aiming to maximize accuracy while constraining latency under a predefined budget on the target device. For filter importance ranking, HALP leverages a latency lookup table to track latency reduction potential and a global saliency score to gauge accuracy drop. Both metrics can be evaluated very efficiently during pruning, allowing us to reformulate global structural pruning as a reward maximization problem under the target constraint. This makes the problem solvable via our augmented knapsack solver, enabling HALP to surpass prior work in pruning efficacy and the accuracy-efficiency trade-off. We examine HALP on both classification and detection tasks, over varying networks, on the ImageNet and VOC datasets, and on different platforms. In particular, for ResNet-50/-101 pruning on ImageNet, HALP improves network throughput by $1.60\times$/$1.90\times$ with $+0.3\%$/$-0.2\%$ top-1 accuracy changes, respectively. For SSD pruning on VOC, HALP improves throughput by $1.94\times$ with only a $0.56$ mAP drop. HALP consistently outperforms prior art, sometimes by large margins. Project page at https://halp-neurips.github.io/.
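To make the knapsack formulation in the abstract concrete, below is a minimal sketch in Python of selecting filters by saliency under a latency budget using a standard 0/1 knapsack dynamic program. The function name `knapsack_filter_selection`, the integer-unit latency costs, and the toy numbers are illustrative assumptions, not the paper's code; the paper's augmented solver additionally handles layer-wise latency lookup tables and grouping constraints that this simplified illustration omits.

```python
# Minimal sketch of the latency-saliency knapsack idea from the abstract.
# Assumption: per-filter latency costs are quantized to integer units so a
# standard 0/1 knapsack dynamic program applies.

from typing import List, Tuple


def knapsack_filter_selection(
    saliency: List[float],      # per-filter importance scores (higher = more worth keeping)
    latency_cost: List[int],    # per-filter latency contribution, in integer units
    latency_budget: int,        # total latency budget for the pruned network
) -> Tuple[float, List[int]]:
    """Select filters maximizing total saliency subject to a latency budget."""
    n = len(saliency)
    # dp[b] = best total saliency achievable with latency budget b
    dp = [0.0] * (latency_budget + 1)
    # keep[i][b] = True if filter i was included when dp[b] was improved
    keep = [[False] * (latency_budget + 1) for _ in range(n)]

    for i in range(n):
        cost, value = latency_cost[i], saliency[i]
        # iterate budgets downward so each filter is used at most once (0/1 knapsack)
        for b in range(latency_budget, cost - 1, -1):
            if dp[b - cost] + value > dp[b]:
                dp[b] = dp[b - cost] + value
                keep[i][b] = True

    # backtrack to recover which filters are kept
    selected, b = [], latency_budget
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            selected.append(i)
            b -= latency_cost[i]
    return dp[latency_budget], sorted(selected)


if __name__ == "__main__":
    # toy example: 5 filters with made-up saliency and latency numbers
    scores = [0.9, 0.2, 0.75, 0.4, 0.6]
    costs = [4, 1, 3, 2, 3]
    best_score, kept = knapsack_filter_selection(scores, costs, latency_budget=7)
    print(f"kept filters: {kept}, total saliency: {best_score:.2f}")
```

In this toy run the solver keeps filters 0 and 2 (total saliency 1.65 at latency cost 7), illustrating how the latency budget, rather than a fixed pruning ratio, drives which filters survive.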