Paper Title
UniDAformer: Unified Domain Adaptive Panoptic Segmentation Transformer via Hierarchical Mask Calibration
Paper Authors
Paper Abstract
Domain adaptive panoptic segmentation aims to mitigate the data annotation challenge by leveraging off-the-shelf annotated data in one or multiple related source domains. However, existing studies employ two separate networks for instance segmentation and semantic segmentation, which leads to excessive network parameters as well as complicated and computationally intensive training and inference processes. We design UniDAformer, a simple unified domain adaptive panoptic segmentation transformer that achieves domain adaptive instance segmentation and semantic segmentation simultaneously within a single network. UniDAformer introduces Hierarchical Mask Calibration (HMC), which rectifies inaccurate predictions at the level of regions, superpixels and pixels via online self-training on the fly. It has three unique features: 1) it enables unified domain adaptive panoptic adaptation; 2) it mitigates false predictions and effectively improves domain adaptive panoptic segmentation; 3) it is end-to-end trainable with a much simpler training and inference pipeline. Extensive experiments over multiple public benchmarks show that UniDAformer achieves superior domain adaptive panoptic segmentation as compared with the state of the art.
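The coarse-to-fine idea behind HMC — first vetting a predicted mask as a whole region, then snapping it to superpixel boundaries, then refining individual pixels into a pseudo-label for self-training — can be illustrated with a loose sketch. This is not the paper's implementation; the function, its thresholds, and the `momentum` blending parameter are all hypothetical simplifications of the three-level calibration the abstract describes.

```python
import numpy as np

def calibrate_mask(prob, superpixels, momentum=0.8):
    """Loose sketch of region -> superpixel -> pixel mask calibration.

    prob:        (H, W) predicted foreground probability for one mask.
    superpixels: (H, W) integer superpixel labels (e.g. from SLIC).
    Returns a calibrated binary pseudo-mask for self-training.
    """
    # Region level: reject the whole mask if its overall confidence
    # is low relative to its peak (hypothetical criterion).
    if prob.mean() < 0.5 * prob.max():
        return np.zeros_like(prob, dtype=bool)

    # Superpixel level: replace each superpixel's values with its mean
    # confidence, so the mask snaps to superpixel boundaries.
    calibrated = np.zeros_like(prob)
    for sp in np.unique(superpixels):
        m = superpixels == sp
        calibrated[m] = prob[m].mean()

    # Pixel level: blend the superpixel-smoothed map with the raw
    # per-pixel prediction, then threshold into a binary pseudo-label.
    blended = momentum * calibrated + (1 - momentum) * prob
    return blended > 0.5
```

In an online self-training loop, such calibrated pseudo-masks would supervise the network on unlabeled target-domain images, which is how the abstract's "on the fly" rectification would plug into training.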