Paper Title
Residual Swin Transformer Channel Attention Network for Image Demosaicing
Paper Authors
Paper Abstract
Image demosaicing is the problem of interpolating full-resolution color images from raw sensor (color filter array) data. During the last decade, deep neural networks have been widely used in image restoration, and in particular in demosaicing, attaining significant performance improvements. In recent years, vision transformers have been designed and successfully applied to various computer vision tasks. SwinIR, a recent Swin Transformer (ST)-based image restoration method, demonstrates state-of-the-art performance with fewer parameters than neural network-based methods. Inspired by the success of SwinIR, we propose in this paper a novel Swin Transformer-based network for image demosaicing, called RSTCANet. To extract image features, RSTCANet stacks several residual Swin Transformer Channel Attention blocks (RSTCAB), introducing channel attention for every two successive ST blocks. Extensive experiments demonstrate that RSTCANet outperforms state-of-the-art image demosaicing methods while having a smaller number of parameters.
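The channel attention used in each RSTCAB can be understood as a squeeze-and-excitation style rescaling of feature channels. The sketch below is a minimal, hypothetical NumPy illustration of that idea (global average pooling, a small bottleneck MLP, and a sigmoid gate); it is not the paper's actual implementation, and the function and weight names are invented for illustration.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    x  : feature map of shape (C, H, W)
    w1 : (C//r, C) channel-reduction weights (r = reduction ratio)
    w2 : (C, C//r) channel-expansion weights
    Returns x with each channel rescaled by a learned weight in (0, 1).
    """
    s = x.mean(axis=(1, 2))              # "squeeze": global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)          # bottleneck MLP with ReLU -> (C//r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # sigmoid gate, one weight per channel
    return x * a[:, None, None]          # "excitation": rescale each channel
```

In RSTCANet this kind of gating would sit after pairs of ST blocks, letting the network emphasize informative feature channels extracted from the mosaicked input.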