Paper Title
Adaptive Block Compressive Sensing: towards a real-time and low-complexity implementation
Paper Authors
Paper Abstract
Adaptive block-based compressive sensing (ABCS) algorithms are studied in the context of the practical realization of compressive sensing on resource-constrained image and video sensing platforms that use single-pixel cameras, multi-pixel cameras, or focal plane processing sensors. In this paper, we introduce two novel ABCS algorithms that are suitable for compressively sensing images or intra-coded video frames. Both use deterministic 2D-DCT dictionaries when sensing the images instead of random dictionaries. The first uses a low number of compressive measurements to compute the block boundary variation (BBV) around each image block, from which it estimates the number of 2D-DCT transform coefficients to measure from each block. The second uses a low number of DCT domain (DD) measurements to estimate the total number of transform coefficients to capture from each block. The two algorithms permit reconstruction in real time, averaging 8 ms and 26 ms for 256×256 and 512×512 greyscale images, respectively, using a simple inverse 2D-DCT operation without requiring GPU acceleration. Furthermore, we show that an iterative compressive sensing reconstruction algorithm (IDA), inspired by the denoising-based approximate message passing algorithm, can be used as a post-processing, quality enhancement technique. IDA trades off real-time operation to yield performance improvement over state-of-the-art GPU-assisted algorithms of 1.31 dB and 0.0152 in terms of PSNR and SSIM, respectively. It also exceeds the PSNR performance of a state-of-the-art deep neural network by 0.4 dB and SSIM by 0.0126.
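The core idea of the BBV-driven algorithm above can be illustrated with a minimal sketch: partition the image into blocks, estimate each block's boundary variation, allocate a per-block share of a global measurement budget in proportion to that variation, "sense" that many low-frequency 2D-DCT coefficients per block, and reconstruct with a single inverse 2D-DCT. This is an assumption-laden toy, not the paper's exact method: here BBV is approximated by the variance of a block's border pixels computed from the full image (the paper estimates it from a few compressive measurements), the function name `abcs_sketch` and the proportional allocation rule are illustrative choices, and no quantization or sensing noise is modeled.

```python
import numpy as np
from scipy.fft import dctn, idctn


def zigzag_indices(n):
    # (row, col) pairs ordered by anti-diagonal, so low
    # spatial frequencies come first, as in JPEG-style scans.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1], p[0]))


def abcs_sketch(img, block=16, total_rate=0.2):
    """Toy adaptive block CS: spend more 2D-DCT measurements on
    blocks whose boundaries vary more (a crude BBV proxy)."""
    h, w = img.shape
    bh, bw = h // block, w // block
    blocks = (img[:bh * block, :bw * block]
              .reshape(bh, block, bw, block).swapaxes(1, 2))

    # BBV proxy: variance of each block's border pixels.
    # (Assumption: the paper derives this from a small number of
    # compressive measurements rather than the raw pixels.)
    bbv = np.empty((bh, bw))
    for i in range(bh):
        for j in range(bw):
            b = blocks[i, j]
            border = np.concatenate([b[0], b[-1], b[1:-1, 0], b[1:-1, -1]])
            bbv[i, j] = border.var()

    # Split a global measurement budget across blocks in
    # proportion to BBV; every block gets at least one (the DC term).
    budget = int(total_rate * bh * bw * block * block)
    alloc = np.maximum(1, (budget * bbv / (bbv.sum() + 1e-12)).astype(int))
    alloc = np.minimum(alloc, block * block)

    zz = zigzag_indices(block)
    recon = np.zeros(blocks.shape, dtype=float)
    for i in range(bh):
        for j in range(bw):
            coeffs = dctn(blocks[i, j], norm='ortho')
            kept = np.zeros_like(coeffs)
            for (r, c) in zz[:alloc[i, j]]:
                kept[r, c] = coeffs[r, c]  # "measure" low-freq coefficients
            # Reconstruction is a single inverse 2D-DCT per block.
            recon[i, j] = idctn(kept, norm='ortho')

    return recon.swapaxes(1, 2).reshape(bh * block, bw * block)
```

Because reconstruction is just one inverse transform per block, its cost is dominated by the DCT itself, which is consistent with the millisecond-scale, GPU-free timings reported in the abstract; the IDA post-processing stage would then refine this initial estimate iteratively at extra cost.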