Paper Title
Model-Aware Regularization For Learning Approaches To Inverse Problems
Paper Authors
Paper Abstract
There are various inverse problems -- including reconstruction problems arising in medical imaging -- where one is often aware of the forward operator that maps variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in deep learning approaches increasingly used to solve inverse problems. In this paper, we provide one such way via an analysis of the generalisation error of deep learning methods applicable to inverse problems. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients associated with the learning problem, such as the complexity of the data space, the size of the training set, the Jacobian of the deep neural network, and the Jacobian of the composition of the forward operator with the neural network. We then propose a 'plug-and-play' regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We also propose a new method allowing us to tightly upper bound the Lipschitz constants of the relevant functions that is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in the classical compressed sensing setup and in accelerated Magnetic Resonance Imaging (MRI).
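To make the model-aware regulariser described above concrete, the following is a minimal, hypothetical sketch in NumPy. It is not the paper's implementation: the forward operator `A`, the toy "reconstruction network" `g`, and the finite-difference Jacobian helper are all illustrative assumptions standing in for a real sub-sampling operator, a trained deep network, and automatic differentiation. The key idea shown is that the penalty involves both the Jacobian of the network and the Jacobian of the forward operator composed with the network, the two quantities appearing in the generalisation bound.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (illustration only;
    a real implementation would use automatic differentiation)."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - y0) / eps
    return J

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))   # hypothetical forward operator: 8-dim signal -> 4 measurements
W = rng.standard_normal((8, 4))   # weights of a toy one-layer "network": measurements -> signal
g = lambda y: np.tanh(W @ y)      # stand-in for the reconstruction network

y = rng.standard_normal(4)        # one training measurement vector

# Model-aware penalty at this point: Jacobian of the network,
# plus Jacobian of the forward operator composed with the network.
J_g = jacobian_fd(g, y)                        # shape (8, 4)
J_Ag = jacobian_fd(lambda v: A @ g(v), y)      # shape (4, 4)

reg = np.linalg.norm(J_g, 'fro')**2 + np.linalg.norm(J_Ag, 'fro')**2
```

In training, a term like `reg` (evaluated at training points, weighted by hyperparameters) would be added to the data-fidelity loss; by the chain rule, `J_Ag` equals `A @ J_g`, so the composed-Jacobian term is exactly where knowledge of the forward map enters.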