Paper Title

Texture-guided Saliency Distilling for Unsupervised Salient Object Detection

Paper Authors

Zhou, Huajun, Qiao, Bo, Yang, Lingxiao, Lai, Jianhuang, Xie, Xiaohua

Paper Abstract

Deep learning-based Unsupervised Salient Object Detection (USOD) mainly relies on noisy saliency pseudo labels generated by traditional handcrafted methods or pre-trained networks. To cope with the noisy-label problem, one class of methods focuses only on easy samples with reliable labels, but ignores the valuable knowledge in hard samples. In this paper, we propose a novel USOD method to mine rich and accurate saliency knowledge from both easy and hard samples. First, we propose a Confidence-aware Saliency Distilling (CSD) strategy that scores samples conditioned on their confidences, which guides the model to distill saliency knowledge progressively from easy samples to hard samples. Second, we propose a Boundary-aware Texture Matching (BTM) strategy to refine the boundaries of noisy labels by matching the textures around the predicted boundary. Extensive experiments on RGB, RGB-D, RGB-T, and video SOD benchmarks prove that our method achieves state-of-the-art USOD performance.
