Title

CC-Loss: Channel Correlation Loss for Image Classification

Authors

Zeyu Song, Dongliang Chang, Zhanyu Ma, Xiaoxu Li, Zheng-Hua Tan

Abstract

The loss function is a key component in deep learning models. A commonly used loss function for classification is the cross entropy loss, which is a simple yet effective application of information theory to classification problems. Based on this loss, many other loss functions have been proposed,~\emph{e.g.}, by adding intra-class and inter-class constraints to enhance the discriminative ability of the learned features. However, these loss functions fail to consider the connections between the feature distribution and the model structure. Aiming at addressing this problem, we propose a channel correlation loss (CC-Loss) that is able to constrain the specific relations between classes and channels as well as maintain the intra-class and the inter-class separability. CC-Loss uses a channel attention module to generate channel attention of features for each sample in the training stage. Next, a Euclidean distance matrix is calculated to make the channel attention vectors associated with the same class become identical and to increase the difference between different classes. Finally, we obtain a feature embedding with good intra-class compactness and inter-class separability. Experimental results show that two different backbone models trained with the proposed CC-Loss outperform the state-of-the-art loss functions on three image classification datasets.
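The abstract's core mechanism, pulling same-class channel-attention vectors together via a Euclidean distance matrix while pushing different-class vectors apart, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the hinge/margin form of the inter-class term and the function name `cc_loss_sketch` are assumptions for clarity, and the per-sample attention vectors are taken as given inputs rather than produced by an attention module.

```python
import math

def cc_loss_sketch(attn, labels, margin=1.0):
    """Illustrative channel-correlation term (not the paper's exact loss).

    attn   -- list of channel-attention vectors, one per sample
    labels -- class label for each sample
    margin -- assumed hinge margin for the inter-class push-apart term
    """
    def dist(a, b):
        # Euclidean distance between two attention vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    intra, inter, n_intra, n_inter = 0.0, 0.0, 0, 0
    for i in range(len(attn)):
        for j in range(i + 1, len(attn)):
            d = dist(attn[i], attn[j])
            if labels[i] == labels[j]:
                # Same class: attention vectors should become identical.
                intra += d
                n_intra += 1
            else:
                # Different classes: penalize vectors closer than the margin.
                inter += max(margin - d, 0.0)
                n_inter += 1
    intra_term = intra / n_intra if n_intra else 0.0
    inter_term = inter / n_inter if n_inter else 0.0
    return intra_term + inter_term
```

With identical same-class vectors and well-separated classes, both terms vanish, which matches the stated goal of intra-class compactness and inter-class separability. In practice this term would be added to the cross entropy loss during training.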
