Paper Title

Textural-Perceptual Joint Learning for No-Reference Super-Resolution Image Quality Assessment

Authors

Yuqing Liu, Qi Jia, Shanshe Wang, Siwei Ma, Wen Gao

Abstract

Image super-resolution (SR) has been widely investigated in recent years. However, it is challenging to fairly estimate the performance of various SR methods, due to the lack of reliable and accurate criteria for perceptual quality. Existing metrics concentrate on specific kinds of degradation without distinguishing visually sensitive areas, and therefore cannot describe the diverse SR degeneration situations in both low-level textural and high-level perceptual information. In this paper, we focus on the textural and perceptual degradation of SR images and design a dual-stream network, dubbed TPNet, to jointly explore textural and perceptual information for quality assessment. By mimicking the human visual system (HVS), which pays more attention to significant image areas, we develop spatial attention to make the visually sensitive information more distinguishable, and utilize feature normalization (F-Norm) to boost the network representation. Experimental results show that TPNet predicts visual quality scores more accurately than other methods and demonstrates better consistency with human perception. The source code will be available at \url{http://github.com/yuqing-liu-dut/NRIQA_SR}.
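To make the described architecture more concrete, below is a minimal PyTorch sketch of a dual-stream no-reference quality network with spatial attention and a feature-normalization stand-in. This is not the authors' implementation (the official code is at the GitHub URL above): every module name, depth, and channel width here is an illustrative assumption, and since the abstract does not give the exact F-Norm formulation, a simple learnable per-channel affine map is used in its place.

```python
# Minimal, illustrative sketch only -- NOT the authors' released TPNet code.
# All module names, depths, and channel widths are assumptions.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Learn a per-pixel mask so visually sensitive regions dominate,
    mimicking the HVS emphasis on significant image areas."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mask(x)  # spatially reweight the features


class FNorm(nn.Module):
    """Stand-in for the paper's feature normalization: a learnable
    per-channel affine rescaling (an assumption, not the exact F-Norm)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gamma + self.beta


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TPNetSketch(nn.Module):
    """Dual streams: a shallow branch for low-level texture and a
    downsampled, deeper branch for high-level perception; the fused
    features regress a single quality score."""

    def __init__(self, width: int = 32):
        super().__init__()
        self.textural = nn.Sequential(
            conv_block(3, width), FNorm(width), SpatialAttention(width)
        )
        self.perceptual = nn.Sequential(
            conv_block(3, width),
            nn.MaxPool2d(2),
            conv_block(width, width),
            FNorm(width),
            SpatialAttention(width),
        )
        self.head = nn.Linear(2 * width, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.textural(x).mean(dim=(2, 3))    # global-pool textural stream
        p = self.perceptual(x).mean(dim=(2, 3))  # global-pool perceptual stream
        return self.head(torch.cat([t, p], dim=1)).squeeze(1)


model = TPNetSketch()
scores = model(torch.randn(2, 3, 96, 96))  # two SR patches -> two quality scores
```

One design note on the sketch: globally averaging each stream before fusion keeps the regressor's input size fixed even though the two branches operate at different spatial resolutions, which is one simple way to fuse a texture-scale stream with a coarser perceptual-scale stream.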
