Paper Title
Group Whitening: Balancing Learning Efficiency and Representational Capacity
Paper Authors
Paper Abstract
Batch normalization (BN) is an important technique commonly incorporated into deep learning models to perform standardization within mini-batches. The merits of BN in improving a model's learning efficiency can be further amplified by applying whitening, while its drawbacks in estimating population statistics for inference can be avoided through group normalization (GN). This paper proposes group whitening (GW), which exploits the advantages of the whitening operation and avoids the disadvantages of normalization within mini-batches. In addition, we analyze the constraints imposed on features by normalization, and show how the batch size (group number) affects the performance of batch (group) normalized networks, from the perspective of a model's representational capacity. This analysis provides theoretical guidance for applying GW in practice. Finally, we apply the proposed GW to ResNet and ResNeXt architectures and conduct experiments on the ImageNet and COCO benchmarks. Results show that GW consistently improves the performance of different architectures, with absolute gains of $1.02\%$ $\sim$ $1.49\%$ in top-1 accuracy on ImageNet and $1.82\%$ $\sim$ $3.21\%$ in bounding box AP on COCO.
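The abstract's core idea, whitening features within channel groups of a single sample so that no mini-batch statistics are needed at inference, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the group count, the ZCA-style whitening via eigendecomposition, and the `eps` stabilizer are illustrative assumptions about how such an operation is commonly realized.

```python
import numpy as np

def group_whitening(x, num_groups=4, eps=1e-5):
    """Sketch of group whitening: for each sample independently, split the
    channels into groups and ZCA-whiten the features within each group, so
    the operation never depends on other samples in the mini-batch."""
    n, c, h, w = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    d = c // num_groups                       # channels per group
    out = np.empty_like(x)
    for i in range(n):
        for g in range(num_groups):
            # flatten the group's features to a (d, h*w) matrix
            xg = x[i, g * d:(g + 1) * d].reshape(d, -1)
            xg = xg - xg.mean(axis=1, keepdims=True)      # center
            cov = xg @ xg.T / xg.shape[1] + eps * np.eye(d)
            # ZCA whitening matrix: (Sigma + eps*I)^{-1/2}
            eigval, eigvec = np.linalg.eigh(cov)
            w_mat = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
            out[i, g * d:(g + 1) * d] = (w_mat @ xg).reshape(d, h, w)
    return out

# After whitening, each group's within-sample covariance is close to identity.
x = np.random.randn(2, 32, 8, 8)
y = group_whitening(x, num_groups=4)
```

Because each sample is whitened on its own, the train-time and inference-time computations are identical, which is the property the abstract contrasts with BN's need for estimated population statistics.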