Paper Title
Learning Hierarchy Aware Features for Reducing Mistake Severity
Paper Authors
Paper Abstract
Label hierarchies are often available a priori as part of biological taxonomies or language datasets such as WordNet. Several works exploit these hierarchies to learn hierarchy-aware features so that the classifier makes semantically meaningful mistakes while maintaining or reducing the overall error. In this paper, we propose a novel approach for learning Hierarchy Aware Features (HAF) that leverages classifiers at each level of the hierarchy, constrained to generate predictions consistent with the label hierarchy. The classifiers are trained by minimizing a Jensen-Shannon divergence with target soft labels obtained from the fine-grained classifier. Additionally, we employ a simple geometric loss that constrains the feature-space geometry to capture the semantic structure of the label space. HAF is a training-time approach that reduces mistake severity while maintaining top-1 error, thereby addressing the problem of the cross-entropy loss, which treats all mistakes as equal. We evaluate HAF on three hierarchical datasets and achieve state-of-the-art results on the iNaturalist-19 and CIFAR-100 datasets. The source code is available at https://github.com/07Agarg/HAF
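
To make the training objective concrete, below is a minimal sketch in PyTorch, assuming a two-level hierarchy, of the Jensen-Shannon divergence term described in the abstract: the coarse-level classifier's prediction is matched against a soft target built by pooling the fine-grained classifier's probabilities within each coarse class. The function name jsd_loss and the fine_to_coarse index mapping are illustrative assumptions, not the authors' exact HAF implementation; consult the linked repository for the official code.

    import torch
    import torch.nn.functional as F

    def jsd_loss(coarse_logits, fine_logits, fine_to_coarse):
        """Jensen-Shannon divergence between the coarse classifier's
        prediction and a soft target obtained by summing the fine-grained
        classifier's probabilities within each coarse class.

        fine_to_coarse: LongTensor of shape [num_fine] mapping each
        fine class index to its parent coarse class index (assumed).
        """
        p = F.softmax(coarse_logits, dim=1)           # coarse prediction
        fine_probs = F.softmax(fine_logits, dim=1)    # fine prediction
        # Pool fine-class probabilities into coarse soft targets.
        q = torch.zeros_like(p).index_add_(1, fine_to_coarse, fine_probs)
        m = 0.5 * (p + q)
        # JSD = 0.5 * KL(p || m) + 0.5 * KL(q || m).
        # F.kl_div(input, target) computes KL(target || input),
        # with input given as log-probabilities.
        kl_pm = F.kl_div(m.log(), p, reduction="batchmean")
        kl_qm = F.kl_div(m.log(), q, reduction="batchmean")
        return 0.5 * (kl_pm + kl_qm)

    # Example usage: 100 fine classes grouped into 20 coarse classes
    # (a CIFAR-100-like hierarchy); the random mapping is for illustration.
    fine_to_coarse = torch.randint(0, 20, (100,))
    coarse_logits = torch.randn(8, 20)
    fine_logits = torch.randn(8, 100)
    loss = jsd_loss(coarse_logits, fine_logits, fine_to_coarse)

Summing fine-class probabilities under each coarse parent guarantees the soft target is itself a valid distribution over coarse classes, which is one natural way to enforce predictions consistent with the label hierarchy; the paper may use a different aggregation.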