Paper Title

MicroNet: Towards Image Recognition with Extremely Low FLOPs

Authors

Yunsheng Li, Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Lu Yuan, Zicheng Liu, Lei Zhang, Nuno Vasconcelos

Abstract

In this paper, we present MicroNet, an efficient convolutional neural network with extremely low computational cost (e.g., 6 MFLOPs on ImageNet classification). Such a low-cost network is highly desirable on edge devices, yet usually suffers from significant performance degradation. We handle the extremely low FLOPs based on two design principles: (a) avoiding the reduction of network width by lowering node connectivity, and (b) compensating for the reduction of network depth by introducing more complex non-linearity per layer. First, we propose Micro-Factorized convolution to factorize both pointwise and depthwise convolutions into low-rank matrices for a good tradeoff between the number of channels and input/output connectivity. Second, we propose a new activation function, named Dynamic Shift-Max, to improve non-linearity by maxing out multiple dynamic fusions between an input feature map and its circular channel shift. The fusions are dynamic, as their parameters adapt to the input. Building upon Micro-Factorized convolution and Dynamic Shift-Max, a family of MicroNets achieves a significant performance gain over the state of the art in the low-FLOP regime. For instance, MicroNet-M1 achieves 61.1% top-1 accuracy on ImageNet classification with 12 MFLOPs, outperforming MobileNetV3 by 11.3%.
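The two building blocks named in the abstract can be illustrated with short PyTorch sketches. Below is a minimal sketch of Micro-Factorized pointwise convolution, not the authors' released implementation: the module name and the default `reduction` (R) and `groups` (G) values are illustrative assumptions. A C-to-C 1x1 convolution is split into two group-wise 1x1 convolutions with a channel shuffle in between, cutting per-pixel multiplies from C^2 to 2*C^2/(R*G); the paper additionally ties G to sqrt(C/R) so that each output channel connects to each input channel exactly once.

```python
import torch
import torch.nn as nn

class MicroFactorizedPointwise(nn.Module):
    """Sketch of Micro-Factorized pointwise convolution: a C-to-C 1x1
    convolution is factorized into two group-wise 1x1 convolutions with a
    channel shuffle in between (hyperparameters here are illustrative)."""

    def __init__(self, channels: int, reduction: int = 4, groups: int = 4):
        super().__init__()
        hidden = channels // reduction  # R-fold channel compression
        # compress C -> C/R with G groups, then expand C/R -> C with G groups
        self.compress = nn.Conv2d(channels, hidden, 1, groups=groups, bias=False)
        self.expand = nn.Conv2d(hidden, channels, 1, groups=groups, bias=False)
        self.groups = groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.compress(x)
        # channel shuffle so every output block sees every input block
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).reshape(b, c, h, w)
        return self.expand(x)
```

Similarly, a minimal sketch of Dynamic Shift-Max, under the assumption of K dynamic fusions over J circular channel shifts, with the fusion coefficients predicted from the globally pooled input in squeeze-and-excitation style; the default K, J, `groups`, and `reduction` values are illustrative:

```python
import torch
import torch.nn as nn

class DynamicShiftMax(nn.Module):
    """Sketch of Dynamic Shift-Max: the output is the element-wise max over
    K fusions, each a weighted sum of J circular channel-shifted copies of
    the input; the weights are predicted per example, so the non-linearity
    adapts to the input."""

    def __init__(self, channels: int, groups: int = 4,
                 K: int = 2, J: int = 2, reduction: int = 4):
        super().__init__()
        self.groups, self.K, self.J = groups, K, J
        # coefficient predictor a_{k,j,c}(x) from global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels * K * J),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # predict fusion coefficients from the pooled input
        a = self.fc(x.mean(dim=(2, 3)))              # (B, C*K*J)
        a = a.view(b, self.K, self.J, c, 1, 1)
        shift = c // self.groups
        # J circular channel shifts of the input: channel (i + j*C/G) mod C
        shifted = torch.stack(
            [torch.roll(x, shifts=-j * shift, dims=1) for j in range(self.J)],
            dim=1)                                   # (B, J, C, H, W)
        # K dynamic fusions over the J shifts, then max over the K fusions
        fusions = (a * shifted.unsqueeze(1)).sum(dim=2)  # (B, K, C, H, W)
        return fusions.max(dim=1).values
```

For example, `DynamicShiftMax(64)(torch.randn(2, 64, 16, 16))` returns a tensor of the same shape, with each output value the max over K input-conditioned fusions of shifted channels.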
