Paper Title

Cross-Image Relational Knowledge Distillation for Semantic Segmentation

Authors

Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang

Abstract


Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples. However, they ignore the global semantic relations among pixels across various images that are valuable for KD. This paper proposes a novel Cross-Image Relational KD (CIRKD), which focuses on transferring structured pixel-to-pixel and pixel-to-region relations among the whole images. The motivation is that a good teacher network could construct a well-structured feature space in terms of global pixel dependencies. CIRKD makes the student mimic better structured semantic relations from the teacher, thus improving the segmentation performance. Experimental results over Cityscapes, CamVid and Pascal VOC datasets demonstrate the effectiveness of our proposed approach against state-of-the-art distillation methods. The code is available at https://github.com/winycg/CIRKD.
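As a rough illustration of the cross-image relational idea described in the abstract, the sketch below builds row-normalized pixel-to-pixel similarity matrices between feature maps of different images and aligns the student's matrices to the teacher's with a KL divergence. The function names (`pixel_relation_matrix`, `cross_image_p2p_loss`), the temperature `tau`, and the assumption that student and teacher features have already been projected to the same channel dimension are illustrative choices, not the authors' released implementation; pixel-to-region relations would be built analogously against class-region prototypes (e.g., drawn from a memory bank).

```python
import torch
import torch.nn.functional as F


def pixel_relation_matrix(feat_a, feat_b, tau=0.1):
    """Cross-image pixel-to-pixel relation between two feature maps.

    feat_a, feat_b: (C, H, W) feature maps from two different images.
    Returns an (H*W, H*W) row-wise softmax similarity matrix.
    """
    a = F.normalize(feat_a.flatten(1).t(), dim=1)  # (H*W, C) unit vectors
    b = F.normalize(feat_b.flatten(1).t(), dim=1)  # (H*W, C) unit vectors
    return F.softmax(a @ b.t() / tau, dim=1)


def cross_image_p2p_loss(student_feats, teacher_feats, tau=0.1):
    """KL divergence between teacher and student cross-image pixel relations.

    student_feats, teacher_feats: (N, C, H, W) batches, assumed to share the
    same channel dimension (e.g., after a projection head on the student).
    Relations are formed between every ordered pair of distinct images.
    """
    n = student_feats.size(0)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p_t = pixel_relation_matrix(teacher_feats[i], teacher_feats[j], tau)
            p_s = pixel_relation_matrix(student_feats[i], student_feats[j], tau)
            loss = loss + F.kl_div(p_s.log(), p_t, reduction="batchmean")
            pairs += 1
    return loss / max(pairs, 1)
```

In practice such a relational term would be added to the usual pixel-wise cross-entropy (and any other distillation losses) with a weighting coefficient; the pairwise loop over the batch is written for clarity and can be batched for efficiency.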
