Paper Title
On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective
Paper Authors
Paper Abstract
To put a state-of-the-art neural network to practical use, it is necessary to design a model that strikes a good trade-off between resource consumption and performance on the test set. Many researchers and engineers are developing methods that enable models to be trained or designed more efficiently. Developing an efficient model involves several strategies, such as network architecture search, pruning, quantization, knowledge distillation, the use of cheap convolutions, and regularization, as well as any craft that leads to a better performance-resource trade-off. When combining these techniques, it would be ideal if one source of performance improvement did not conflict with the others. We call this property the orthogonality of model efficiency. In this paper, we focus on knowledge distillation and demonstrate, both analytically and empirically, that knowledge distillation methods are orthogonal to other efficiency-enhancing methods. Analytically, we claim that knowledge distillation functions analogously to an ensemble method, bootstrap aggregating. This analytical explanation is provided from the perspective of the implicit data augmentation property of knowledge distillation. Empirically, we verify that knowledge distillation is a powerful apparatus for the practical deployment of efficient neural networks, and we also introduce ways to integrate it with other methods effectively.
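For readers unfamiliar with the mechanism referenced in the abstract, the sketch below shows the generic soft-target knowledge-distillation objective (in the style of Hinton et al., 2015) in PyTorch. It is a minimal illustration only: the `temperature` and `alpha` values are illustrative assumptions, not the configuration used in this paper, and the paper's actual training setup may differ.

```python
# Minimal sketch of a generic knowledge-distillation loss (Hinton-style).
# Assumed hyperparameters (temperature, alpha) are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, temperature=4.0, alpha=0.9):
    """Combine hard-label cross-entropy with a soft-target KL term to the teacher."""
    # Teacher's softened class distribution (the "soft targets").
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Student's softened log-probabilities at the same temperature.
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL term is scaled by T^2 so its gradient magnitude stays comparable
    # to the hard-label term as the temperature changes.
    distill = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    hard = F.cross_entropy(student_logits, targets)
    return alpha * distill + (1.0 - alpha) * hard
```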