Paper Title

Multiscale Convolutional Transformer with Center Mask Pretraining for Hyperspectral Image Classification

Authors

Sen Jia, Yifan Wang

Abstract

Hyperspectral images (HSI) not only offer a broad macroscopic field of view but also contain rich spectral information, through which the types of surface objects can be identified; this is one of the main applications in hyperspectral image research. In recent years, more and more deep learning methods have been proposed, among which convolutional neural networks (CNNs) are the most influential. However, CNN-based methods struggle to capture long-range dependencies and require a large amount of labeled data for model training. Besides, most self-supervised training methods in the field of HSI classification are based on the reconstruction of the input samples, and they have difficulty making effective use of unlabeled samples. To address the shortcomings of CNN networks, we propose a novel multi-scale convolutional embedding module for HSI that realizes effective extraction of spatial-spectral information and combines well with a Transformer network. To make more efficient use of unlabeled data, we propose a new self-supervised pretext task. Similar to the masked autoencoder (MAE), our pre-training method masks only the token corresponding to the central pixel in the encoder and inputs the remaining tokens into the decoder to reconstruct the spectral information of the central pixel. Such a pretext task can better model the relationship between the central feature and its neighborhood features, and it yields more stable training results.
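The center-mask idea described above can be illustrated with a minimal sketch of the data preparation step: given an HSI patch, the central pixel's token is withheld as the reconstruction target, and the remaining tokens form the decoder input. This is an illustrative NumPy sketch, not the authors' implementation; the function name `center_mask_tokens` and the patch/band sizes are assumptions for the example.

```python
import numpy as np

def center_mask_tokens(patch):
    """Split an HSI patch into context tokens and a center-pixel target.

    patch: (H, W, B) array, an H x W spatial window with B spectral bands.
    The token of the central pixel is masked out (removed); its spectrum
    becomes the reconstruction target, and the remaining tokens would be
    fed to the decoder.
    """
    H, W, B = patch.shape
    tokens = patch.reshape(H * W, B)             # one token per pixel
    center = (H // 2) * W + (W // 2)             # flat index of the central pixel
    target = tokens[center].copy()               # spectrum to reconstruct
    context = np.delete(tokens, center, axis=0)  # remaining (unmasked) tokens
    return context, target

# Example: a 7x7 spatial window with 200 spectral bands
patch = np.random.rand(7, 7, 200)
context, target = center_mask_tokens(patch)
print(context.shape, target.shape)  # (48, 200) (200,)
```

In a full pipeline the `context` tokens would pass through the multi-scale convolutional embedding and Transformer encoder, with the decoder trained to regress `target`, e.g. under a mean-squared-error loss.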
