Paper Title
kMaX-DeepLab: k-means Mask Transformer
Paper Authors
Paper Abstract
The rise of transformers in vision tasks not only advances network backbone designs, but also starts a brand-new page to achieve end-to-end image recognition (e.g., object detection and panoptic segmentation). Originating from Natural Language Processing (NLP), transformer architectures, consisting of self-attention and cross-attention, effectively learn long-range interactions between elements in a sequence. However, we observe that most existing transformer-based vision models simply borrow the idea from NLP, neglecting the crucial difference between languages and images, particularly the extremely large sequence length of spatially flattened pixel features. This subsequently impedes the learning in cross-attention between pixel features and object queries. In this paper, we rethink the relationship between pixels and object queries and propose to reformulate the cross-attention learning as a clustering process. Inspired by the traditional k-means clustering algorithm, we develop a k-means Mask Xformer (kMaX-DeepLab) for segmentation tasks, which not only improves the state-of-the-art, but also enjoys a simple and elegant design. As a result, our kMaX-DeepLab achieves a new state-of-the-art performance on COCO val set with 58.0% PQ, Cityscapes val set with 68.4% PQ, 44.0% AP, and 83.5% mIoU, and ADE20K val set with 50.9% PQ and 55.2% mIoU without test-time augmentation or external dataset. We hope our work can shed some light on designing transformers tailored for vision tasks. TensorFlow code and models are available at https://github.com/google-research/deeplab2. A PyTorch re-implementation is also available at https://github.com/bytedance/kmax-deeplab.
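The key idea of the abstract, replacing the spatial-wise softmax of standard cross-attention with a cluster-wise hard assignment so that the query update resembles one k-means iteration, can be illustrated with a short sketch. The snippet below is a minimal illustration in PyTorch, not the paper's actual implementation: the function name `kmeans_cross_attention`, the simplified tensor shapes, and the residual update are assumptions, and it omits the learned projections, normalizations, and the training-time handling of the non-differentiable argmax used in the released code.

```python
import torch
import torch.nn.functional as F

def kmeans_cross_attention(queries, pixel_features):
    """Minimal sketch of a k-means-style cross-attention update.

    queries:        (B, N, D) object queries acting as cluster centers
    pixel_features: (B, L, D) spatially flattened pixel features (L = H*W)

    Standard cross-attention applies softmax over the very long pixel axis L.
    The k-means reformulation instead assigns each pixel to one cluster via
    argmax over the cluster axis N (assignment step), then aggregates the
    assigned pixel features into each cluster center (update step).
    """
    # Affinity between every cluster center and every pixel: (B, N, L).
    affinity = torch.einsum('bnd,bld->bnl', queries, pixel_features)

    # Assignment step: each pixel picks its closest cluster (hard argmax over N).
    # one_hot yields (B, L, N); permute back to (B, N, L).
    assignment = F.one_hot(affinity.argmax(dim=1), num_classes=queries.shape[1])
    assignment = assignment.permute(0, 2, 1).to(pixel_features.dtype)

    # Update step: each center becomes the mean of the pixels assigned to it.
    counts = assignment.sum(dim=-1, keepdim=True).clamp(min=1.0)
    updated = torch.einsum('bnl,bld->bnd', assignment, pixel_features) / counts

    # A residual connection, as is typical in transformer decoders (assumed here).
    return queries + updated
```

Because the argmax blocks gradients through the assignment, the actual model relies on mask-level supervision of the per-pixel cluster assignments rather than backpropagating through this step directly; the sketch only conveys why the long pixel sequence no longer needs a softmax over it.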