Paper Title
Noisy Heuristics NAS: A Network Morphism based Neural Architecture Search using Heuristics
Paper Authors
Paper Abstract
Network Morphism based Neural Architecture Search (NAS) is one of the most efficient NAS methods; however, knowing where and when to add new neurons or remove dysfunctional ones is generally left to black-box Reinforcement Learning models. In this paper, we present a new Network Morphism based NAS called Noisy Heuristics NAS, which uses heuristics learned from manually developing neural network models and inspired by biological neuronal dynamics. First, we add new neurons randomly and prune some away to keep only the best-fitting neurons. Second, we control the number of layers in the network using the relationship between the number of hidden units and the number of input-output connections. Our method can increase or decrease the capacity or non-linearity of a model online, governed by a few meta-parameters specified by the user. Our method generalizes both on toy datasets and on real-world datasets such as MNIST, CIFAR-10, and CIFAR-100, and its performance is comparable to the hand-engineered ResNet-18 architecture with a similar number of parameters.
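Purely as an illustration (not code from the paper), the following is a minimal PyTorch sketch of the grow-then-prune step the abstract describes: randomly add neurons to a layer, then keep only the best-fitting ones. The function names, the use of weight norm as a proxy for "best fitting", and all parameter values are assumptions for this sketch, not the authors' exact heuristics.

```python
# Hypothetical sketch of the grow-then-prune heuristic from the abstract.
# Assumptions: fully connected layers, and weight norm as a fitness proxy.
import torch
import torch.nn as nn

def widen_layer(layer: nn.Linear, n_new: int) -> nn.Linear:
    """Grow a layer by n_new randomly initialized ("noisy") neurons."""
    out, inp = layer.out_features, layer.in_features
    wider = nn.Linear(inp, out + n_new)
    with torch.no_grad():
        wider.weight[:out] = layer.weight  # keep existing neurons
        wider.bias[:out] = layer.bias      # new rows stay randomly initialized
    return wider

def prune_layer(layer: nn.Linear, keep: int) -> nn.Linear:
    """Keep the `keep` neurons with the largest weight norm (fitness proxy)."""
    idx = layer.weight.norm(dim=1).topk(keep).indices
    pruned = nn.Linear(layer.in_features, keep)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[idx])
        pruned.bias.copy_(layer.bias[idx])
    return pruned

# Example: grow a 32-unit layer by 16 noisy neurons, train briefly,
# then prune back to the 40 best-fitting units.
layer = nn.Linear(64, 32)
layer = widen_layer(layer, n_new=16)  # noisy growth
# ... a few training steps would go here ...
layer = prune_layer(layer, keep=40)   # select survivors
```

A complete implementation would also have to rewire the consuming layer's input dimension after each grow or prune step, and would apply the abstract's second heuristic (adjusting depth from the ratio of hidden units to input-output connections); this sketch omits both for brevity.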