Paper Title
An Impartial Take to the CNN vs Transformer Robustness Contest
Paper Authors
Paper Abstract
Following the surge of popularity of Transformers in Computer Vision, several studies have attempted to determine whether they could be more robust to distribution shifts and provide better uncertainty estimates than Convolutional Neural Networks (CNNs). The almost unanimous conclusion is that they are, and it is often conjectured, more or less explicitly, that this supposed superiority is to be attributed to the self-attention mechanism. In this paper, we perform extensive empirical analyses showing that recent state-of-the-art CNNs (particularly ConvNeXt) can be as robust and reliable as, and sometimes even more so than, the current state-of-the-art Transformers. However, there is no clear winner. Therefore, although it is tempting to state the definitive superiority of one family of architectures over another, they seem to enjoy similarly extraordinary performance on a variety of tasks while also suffering from similar vulnerabilities, such as texture, background, and simplicity biases.