Paper Title
Label Semantic Knowledge Distillation for Unbiased Scene Graph Generation
Paper Authors
Paper Abstract
The Scene Graph Generation (SGG) task aims to detect all the objects and their pairwise visual relationships in a given image. Although SGG has achieved remarkable progress over the last few years, almost all existing SGG models follow the same training paradigm: they treat both object and predicate classification in SGG as a single-label classification problem, and the ground-truths are one-hot target labels. However, this prevalent training paradigm overlooks two characteristics of current SGG datasets: 1) For positive samples, some specific subject-object instances may have multiple reasonable predicates. 2) For negative samples, there are numerous missing annotations. Without accounting for these two characteristics, SGG models are easily confused and make wrong predictions. To this end, we propose a novel model-agnostic Label Semantic Knowledge Distillation (LS-KD) for unbiased SGG. Specifically, LS-KD dynamically generates a soft label for each subject-object instance by fusing a predicted Label Semantic Distribution (LSD) with its original one-hot target label. The LSD reflects the correlations between this instance and multiple predicate categories. Meanwhile, we propose two different strategies to predict the LSD: iterative self-KD and synchronous self-KD. Extensive ablations and results on three SGG tasks attest to the superiority and generality of our proposed LS-KD, which consistently achieves a decent trade-off in performance across different predicate categories.
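The core mechanism the abstract describes, fusing a predicted Label Semantic Distribution with the one-hot ground-truth to form a soft target, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `soft_label` and the convex fusion weight `alpha` are assumptions, since the abstract does not specify the fusion rule.

```python
import numpy as np

def soft_label(one_hot: np.ndarray, lsd: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Fuse a predicted Label Semantic Distribution (LSD) with the one-hot target.

    `alpha` is a hypothetical fusion weight (not specified in the abstract);
    both inputs are assumed to be probability distributions over predicates.
    """
    fused = alpha * one_hot + (1.0 - alpha) * lsd
    return fused / fused.sum()  # renormalize so the soft label sums to 1

# Toy example with 5 predicate classes, ground-truth predicate index 2.
one_hot = np.eye(5)[2]
# Hypothetical LSD: the model deems predicates 2 and 3 both plausible
# for this subject-object instance.
lsd = np.array([0.05, 0.10, 0.50, 0.25, 0.10])
print(soft_label(one_hot, lsd))
```

The soft target keeps the ground-truth predicate dominant while assigning non-zero mass to correlated predicates, which is what lets the model hedge on instances that admit multiple reasonable predicates.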