Paper Title

Cooperative Initialization based Deep Neural Network Training

Authors

Pravendra Singh, Munender Varshney, Vinay P. Namboodiri

Abstract

Researchers have proposed various activation functions. These activation functions help the deep network to learn non-linear behavior and have a significant effect on training dynamics and task performance. The performance of these activations also depends on the initial state of the weight parameters, i.e., different initial states lead to differences in the performance of a network. In this paper, we propose a cooperative initialization for training deep networks with the ReLU activation function to improve network performance. Our approach uses multiple activation functions in the initial few epochs to update all sets of weight parameters while training the network. These activation functions cooperate to overcome each other's drawbacks in the update of weight parameters, which in effect learns better "feature representations" and boosts network performance later. Cooperative-initialization-based training also helps in reducing the overfitting problem and does not increase the number of parameters or the inference (test) time of the final model, while improving performance. Experiments show that our approach outperforms various baselines and, at the same time, performs well over various tasks such as classification and detection. The Top-1 classification accuracy of models trained using our approach improves by 2.8% for VGG-16 and 2.1% for ResNet-56 on the CIFAR-100 dataset.
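
The core idea, a warm-up phase in which several activation functions jointly drive the weight updates before the network settles on plain ReLU, can be sketched roughly as follows. This is a minimal PyTorch-style illustration based only on the abstract: the class name CooperativeActivation, the warmup_epochs parameter, the particular activation set (ReLU, ELU, Tanh), and the simple output averaging are assumptions, not the authors' exact formulation.

import torch
import torch.nn as nn

class CooperativeActivation(nn.Module):
    """Blend several activations during the first few epochs, then use plain ReLU."""

    def __init__(self, warmup_epochs=5):
        super().__init__()
        self.warmup_epochs = warmup_epochs
        self.epoch = 0
        # Activations that "cooperate" early in training (assumed set, for illustration).
        self.activations = nn.ModuleList([nn.ReLU(), nn.ELU(), nn.Tanh()])

    def set_epoch(self, epoch):
        self.epoch = epoch

    def forward(self, x):
        if self.epoch < self.warmup_epochs:
            # Averaging the outputs lets gradients flow even where ReLU alone
            # would be inactive (negative pre-activations), so all weight
            # parameters keep being updated in the early epochs.
            return torch.stack([act(x) for act in self.activations]).mean(dim=0)
        # After the cooperative phase, fall back to the standard ReLU.
        return torch.relu(x)

A training loop would call set_epoch(e) on each such module at the start of every epoch; because only ReLU remains after the warm-up, the final model's parameter count and inference time are unchanged, consistent with the claim in the abstract.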
