Paper Title
S4ND: Modeling Images and Videos as Multidimensional Signals Using State Spaces
Paper Authors
Paper Abstract
Visual data such as images and videos are typically modeled as discretizations of inherently continuous, multidimensional signals. Existing continuous-signal models attempt to exploit this fact by modeling the underlying signals of visual (e.g., image) data directly. However, these models have not yet been able to achieve competitive performance on practical vision tasks such as large-scale image and video classification. Building on a recent line of work on deep state space models (SSMs), we propose S4ND, a new multidimensional SSM layer that extends the continuous-signal modeling ability of SSMs to multidimensional data including images and videos. We show that S4ND can model large-scale visual data in $1$D, $2$D, and $3$D as continuous multidimensional signals and demonstrates strong performance by simply swapping Conv2D and self-attention layers with S4ND layers in existing state-of-the-art models. On ImageNet-1k, S4ND exceeds the performance of a Vision Transformer baseline by $1.5\%$ when training with a $1$D sequence of patches, and matches ConvNeXt when modeling images in $2$D. For videos, S4ND improves on an inflated $3$D ConvNeXt in activity classification on HMDB-51 by $4\%$. S4ND implicitly learns global, continuous convolutional kernels that are resolution invariant by construction, providing an inductive bias that enables generalization across multiple resolutions. By developing a simple bandlimiting modification to S4 to overcome aliasing, S4ND achieves strong zero-shot (unseen at training time) resolution performance, outperforming a baseline Conv2D by $40\%$ on CIFAR-10 when trained on $8 \times 8$ and tested on $32 \times 32$ images. When trained with progressive resizing, S4ND comes within $\sim 1\%$ of a high-resolution model while training $22\%$ faster.
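To make the abstract's central idea concrete, below is a minimal NumPy sketch of a global 2D convolution whose kernel is built from 1D state-space kernels, following the factorization S4ND uses: the multidimensional kernel is an outer product of per-dimension 1D SSM kernels, applied efficiently via FFT. The function names (`ssm_kernel_1d`, `s4nd_conv2d`), the random A/B/C parameterization, and the crude Euler discretization are illustrative stand-ins, not the paper's actual S4 parameterization (which uses structured HiPPO-initialized state matrices and a much faster kernel computation).

```python
import numpy as np

def ssm_kernel_1d(length, state_dim=16, dt=1.0 / 16, seed=0):
    """Hypothetical stand-in for a 1D SSM kernel: K[l] = C @ Ad^l @ B,
    where Ad discretizes a continuous-time state matrix A with step dt.
    A real S4 layer parameterizes A, B, C specially and computes this
    far more efficiently; here we materialize it naively."""
    rng = np.random.default_rng(seed)
    A = -np.eye(state_dim) + 0.1 * rng.standard_normal((state_dim, state_dim))
    B = rng.standard_normal((state_dim, 1))
    C = rng.standard_normal((1, state_dim))
    Ad = np.eye(state_dim) + dt * A        # crude Euler discretization of A
    K, x = [], B
    for _ in range(length):
        K.append((C @ x).item())
        x = Ad @ x
    return np.array(K)

def s4nd_conv2d(u, Kh, Kw):
    """Global 2D convolution whose kernel is the outer product of two 1D
    SSM kernels, computed via zero-padded FFT for O(N log N) cost."""
    K2d = np.outer(Kh, Kw)                 # (H, W) separable global kernel
    H, W = u.shape
    fu = np.fft.rfft2(u, s=(2 * H, 2 * W))    # pad for linear (not circular) conv
    fk = np.fft.rfft2(K2d, s=(2 * H, 2 * W))
    return np.fft.irfft2(fu * fk, s=(2 * H, 2 * W))[:H, :W]

u = np.random.default_rng(1).standard_normal((32, 32))  # one channel of a 32x32 image
y = s4nd_conv2d(u, ssm_kernel_1d(32), ssm_kernel_1d(32, seed=1))
print(y.shape)  # (32, 32)
```

Because the kernel comes from a continuous-time system, sampling it at a different `length` and `dt` (e.g., `ssm_kernel_1d(8, dt=1/8)` versus `ssm_kernel_1d(32, dt=1/32)`) yields the same underlying kernel at different resolutions, which is the resolution-invariance property the abstract describes.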
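The bandlimiting modification mentioned in the abstract can likewise be sketched. The idea is to remove frequency content in the kernel above the Nyquist rate of the coarsest sampling resolution, so that the same continuous kernel sampled at, say, 8×8 and 32×32 stays consistent rather than aliased. The post-hoc FFT mask below is only an illustrative approximation; to my reading, the paper instead constrains the SSM kernel's frequency content during kernel construction.

```python
import numpy as np

def bandlimit_kernel(K, cutoff=0.5):
    """Zero out frequency bins above `cutoff` * Nyquist in a sampled 1D kernel.
    Illustrative only: S4ND imposes the bandlimit on the SSM kernel itself,
    not as a post-hoc FFT mask like this."""
    F = np.fft.rfft(K)
    freqs = np.fft.rfftfreq(K.shape[-1])   # normalized frequency, 0..0.5 cycles/sample
    F[freqs > cutoff * 0.5] = 0.0          # discard content that would alias
    return np.fft.irfft(F, n=K.shape[-1])

# A kernel meant to transfer from 8x8 training images to 32x32 test images
# should carry no content above the 8-sample Nyquist rate: cutoff = 8/32.
K32 = bandlimit_kernel(np.random.default_rng(0).standard_normal(32), cutoff=8 / 32)
```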