Paper Title
Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment
Authors
Abstract
Motivated by their recent advances, deep learning techniques have been widely applied to the low-light image enhancement (LIE) problem. Among them, Retinex-theory-based methods, mostly following a decomposition-adjustment pipeline, occupy an important place due to their physical interpretability and promising performance. However, current investigations of Retinex-based deep learning are still insufficient and ignore many useful experiences from traditional methods. Besides, the adjustment step is performed either with simple image processing techniques or by complicated networks, neither of which is satisfactory in practice. To address these issues, we propose a new deep learning framework for the LIE problem. The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks that consider both global brightness and local brightness sensitivity. By virtue of algorithm unrolling, both implicit priors learned from data and explicit priors borrowed from traditional methods can be embedded in the network, facilitating better decomposition. Meanwhile, the consideration of global and local brightness guides the design of simple yet effective network modules for adjustment. Furthermore, to avoid manual parameter tuning, we also propose a self-supervised fine-tuning strategy, which consistently delivers promising performance. Experiments on a series of typical LIE datasets demonstrate the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
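To make the decomposition-adjustment pipeline concrete, the following is a minimal PyTorch sketch of the general idea, assuming the standard Retinex model P ≈ R ∘ L (reflectance times illumination). The abstract does not specify the actual architecture, so every module name, stage count, and update rule here (ProxNet, UnrolledDecomposition, GlobalLocalAdjustment) is a hypothetical illustration of algorithm unrolling with learned proximal steps, not the authors' method.

# Illustrative sketch only: a generic unrolled Retinex decomposition followed
# by a global + local brightness adjustment. All modules are hypothetical
# placeholders, not the architecture proposed in the paper.
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Small residual CNN standing in for a learned proximal operator
    (the 'implicit prior learned from data')."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement

class UnrolledDecomposition(nn.Module):
    """Unrolls a few alternating updates of the Retinex model P ≈ R * L.
    Each stage mimics one optimization iteration: a model-based division
    step (explicit prior) followed by a learned proximal step."""
    def __init__(self, stages=3):
        super().__init__()
        self.stages = stages
        self.prox_R = nn.ModuleList(ProxNet(3) for _ in range(stages))
        self.prox_L = nn.ModuleList(ProxNet(1) for _ in range(stages))

    def forward(self, p):
        # Initialize illumination with the max color channel, a classic
        # Retinex prior; derive the initial reflectance from P = R * L.
        l = p.max(dim=1, keepdim=True).values.clamp(min=1e-4)
        r = p / l
        for k in range(self.stages):
            # Alternating updates: refine L from P / R, then R from P / L.
            l = self.prox_L[k](
                (p / r.clamp(min=1e-4)).mean(dim=1, keepdim=True)
            ).clamp(min=1e-4)
            r = self.prox_R[k]((p / l).clamp(0, 1))
        return r, l

class GlobalLocalAdjustment(nn.Module):
    """Brightens illumination via a single global gamma (global brightness)
    plus a small CNN producing a local correction (local sensitivity)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(0.5))  # global brightness knob
        self.local = ProxNet(1)                       # local refinement

    def forward(self, l):
        return self.local(l ** self.gamma.clamp(0.1, 1.0))

if __name__ == "__main__":
    p = torch.rand(1, 3, 64, 64) * 0.2               # synthetic dark image
    decomp, adjust = UnrolledDecomposition(), GlobalLocalAdjustment()
    r, l = decomp(p)
    enhanced = (r * adjust(l)).clamp(0, 1)           # recompose: R * adjusted L
    print(enhanced.shape)                            # torch.Size([1, 3, 64, 64])

In this reading, the unrolled stages let a hand-crafted update rule (the divisions derived from P = R * L) coexist with learned refinement modules, which is one common way unrolling embeds both explicit and implicit priors; the adjustment module separates a scalar global gamma from a spatially varying local correction, mirroring the global/local brightness split described in the abstract.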