Paper Title

DeiT III: Revenge of the ViT

Paper Authors

Hugo Touvron, Matthieu Cord, Hervé Jégou

Paper Abstract

A Vision Transformer (ViT) is a simple neural architecture amenable to serving several computer vision tasks. It has limited built-in architectural priors, in contrast to more recent architectures that incorporate priors about either the input data or specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BERT-like pre-training such as BeiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new simple data-augmentation procedure with only 3 augmentations, closer to the practice in self-supervised learning. Our evaluations on image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning and semantic segmentation show that our procedure outperforms previous fully supervised training recipes for ViT by a large margin. It also reveals that the performance of our ViT trained with supervision is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.
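
The abstract highlights a data-augmentation procedure with only 3 augmentations. As a rough illustration, here is a minimal sketch of such a policy using torchvision, assuming each image receives exactly one of grayscale, solarization, or Gaussian blur; the specific parameter values are illustrative assumptions, not the paper's exact recipe.

```python
from torchvision import transforms

# Minimal sketch of a "3-augmentation" policy in the spirit of the abstract:
# each image gets exactly one of grayscale / solarization / Gaussian blur
# on top of a standard crop-and-flip pipeline. All parameter values below
# are illustrative assumptions, not the paper's exact recipe.
three_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomChoice([
        transforms.Grayscale(num_output_channels=3),      # drop color information
        transforms.RandomSolarize(threshold=128, p=1.0),  # invert bright pixels
        transforms.GaussianBlur(kernel_size=23),          # smooth high frequencies
    ]),
    transforms.ColorJitter(0.3, 0.3, 0.3),  # mild brightness/contrast/saturation jitter
    transforms.ToTensor(),
])
```

Here `RandomChoice` applies exactly one of the three augmentations per image, which keeps the policy far simpler than multi-operation schemes such as RandAugment and closer to the augmentations used in self-supervised pipelines.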
