Paper Title
D3C2-Net: Dual-Domain Deep Convolutional Coding Network for Compressive Sensing
Paper Authors
Paper Abstract
By mapping iterative optimization algorithms into neural networks (NNs), deep unfolding networks (DUNs) exhibit well-defined and interpretable structures and achieve remarkable success in the field of compressive sensing (CS). However, most existing DUNs rely solely on image-domain unfolding, which restricts information transmission capacity and reconstruction flexibility, leading to loss of image details and unsatisfactory performance. To overcome these limitations, this paper develops a dual-domain optimization framework that combines the priors of (1) the image domain and (2) the convolutional-coding domain, and generalizes to CS and other inverse imaging tasks. By converting this optimization framework into deep NN structures, we present a Dual-Domain Deep Convolutional Coding Network (D3C2-Net), which can efficiently transmit high-capacity, self-adaptive convolutional features across all its unfolded stages. Our theoretical analyses and experiments on simulated and real-world captured data, covering 2D and 3D natural, medical, and scientific signals, demonstrate the effectiveness, practicality, superior performance, and generalization ability of our method over other competing approaches, as well as its significant potential for balancing accuracy, complexity, and interpretability. Code is available at https://github.com/lwq20020127/D3C2-Net.
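The unfolding idea the abstract builds on can be illustrated with a minimal ISTA-style stage, the classical single-image-domain precursor of DUNs: each stage pairs a data-fidelity gradient step with a prior (proximal) step, and a DUN replaces the hand-crafted prior with a learned NN module. This is a hedged sketch of that generic pattern, not the paper's D3C2-Net; the problem sizes, step size `rho`, and threshold `tau` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CS problem (illustrative sizes): y = A @ x_true, 50% sampling ratio.
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0  # sparse ground truth
y = A @ x_true

def soft_threshold(v, t):
    # Proximal operator of the l1 prior; in a DUN this hand-crafted
    # step is replaced by a learned network module.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unfolded_stage(x, y, A, rho=0.1, tau=0.01):
    # One unfolded stage: image-domain gradient descent on the
    # data-fidelity term, followed by the prior step.
    x = x - rho * A.T @ (A @ x - y)
    return soft_threshold(x, tau)

x = np.zeros(n)
for _ in range(50):          # 50 stages ~ 50 unfolded iterations
    x = unfolded_stage(x, y, A)

print(np.linalg.norm(A @ x - y) < np.linalg.norm(y))  # residual shrinks
```

A dual-domain framework like the one proposed here additionally carries a high-capacity convolutional feature (a second optimization variable in the coding domain) from stage to stage, rather than passing only the single-channel image `x` as above.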