Paper Title
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
Paper Authors
Paper Abstract
Discriminative self-supervised learning allows training models on any random group of internet images, potentially recovering salient information that helps differentiate between the images. Applied to ImageNet, this leads to object-centric features that perform on par with supervised features on most object-centric downstream tasks. In this work, we question whether, using this ability, we can learn any salient and more representative information present in a diverse, unbounded set of images from across the globe. To do so, we train models on billions of random images without any data pre-processing or prior assumptions about what we want the model to learn. We scale our model size to a dense 10 billion parameters to avoid underfitting on such a large data size. We extensively study and validate our model's performance on over 50 benchmarks, including fairness, robustness to distribution shift, geographical diversity, fine-grained recognition, image copy detection and many image classification datasets. The resulting model not only captures semantic information well, it also captures information about artistic style and learns salient information such as geolocation and multilingual word embeddings based on visual content only. More importantly, we discover that such a model is more robust, more fair, less harmful and less biased than supervised models or models trained on object-centric datasets such as ImageNet.
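To make the abstract's notion of discriminative self-supervised pretraining concrete, below is a minimal sketch of a contrastive (InfoNCE-style) objective trained on unlabeled, uncurated images only. The encoder, augmentations, and hyperparameters (SmallEncoder, info_nce_loss, temperature) are illustrative assumptions for this sketch and do not reflect the paper's actual architecture or training recipe.

# Hedged sketch (not the paper's exact method): a discriminative self-supervised
# objective in the InfoNCE style, learning from unlabeled images without curation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy stand-in for a large convolutional trunk (the paper scales to ~10B dense parameters)."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projector = nn.Linear(64, dim)  # projection head feeding the contrastive loss

    def forward(self, x):
        return F.normalize(self.projector(self.backbone(x)), dim=-1)

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss: two augmented views of the same image are positives;
    every other image in the batch acts as a negative."""
    logits = z1 @ z2.t() / temperature      # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: no labels and no data curation -- just two random views per image.
encoder = SmallEncoder()
images = torch.rand(8, 3, 96, 96)           # a random batch of "uncurated" images
view1, view2 = images.flip(-1), images      # placeholder augmentations (horizontal flip vs. identity)
loss = info_nce_loss(encoder(view1), encoder(view2))
loss.backward()

The design point this illustrates is that the training signal comes purely from distinguishing images from one another, so the same recipe applies to any random pool of internet images rather than to a curated, labeled dataset.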