Title
Fixed-Point Automatic Differentiation of Forward--Backward Splitting Algorithms for Partly Smooth Functions
Authors
Abstract
A large class of non-smooth practical optimization problems can be written as the minimization of a sum of smooth and partly smooth functions. We examine such structured problems that also depend on a parameter vector and study the problem of differentiating their solution mapping with respect to the parameter, which has far-reaching applications in sensitivity analysis and parameter learning. Under partial smoothness and other mild assumptions, we apply implicit differentiation (ID) and automatic differentiation (AD) to the fixed-point iterations of proximal splitting algorithms. We show that AD of the sequence generated by these algorithms converges (linearly under further assumptions) to the derivative of the solution mapping. For a variant of automatic differentiation, which we call Fixed-Point Automatic Differentiation (FPAD), we remedy the memory overhead problem of reverse-mode AD and, moreover, obtain theoretically faster convergence. We numerically illustrate the convergence and convergence rates of AD and FPAD on Lasso and Group Lasso problems, and demonstrate FPAD on prototypical image denoising problems by learning the regularization term.
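To make the setting concrete, below is a minimal sketch (not the authors' code) of the basic idea the abstract describes: running the forward-backward splitting (ISTA) fixed-point iteration for a Lasso problem and differentiating the iterates with respect to the regularization parameter via automatic differentiation. The problem data `A`, `b`, the step size `alpha`, and the iteration count are illustrative assumptions; JAX is used here only as a convenient AD tool.

```python
# Hedged sketch: AD through forward-backward splitting (ISTA) for the Lasso
#   min_x 0.5 * ||A x - b||^2 + theta * ||x||_1,
# differentiating the K-th iterate with respect to the parameter theta.
import jax
import jax.numpy as jnp

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the non-smooth, partly smooth part).
    return jnp.sign(v) * jnp.maximum(jnp.abs(v) - tau, 0.0)

def ista_iterates(theta, A, b, alpha, num_iters=500):
    # Fixed-point iteration:
    #   x_{k+1} = prox_{alpha * theta * ||.||_1}(x_k - alpha * A^T (A x_k - b))
    x = jnp.zeros(A.shape[1])
    for _ in range(num_iters):
        x = soft_threshold(x - alpha * A.T @ (A @ x - b), alpha * theta)
    return x

# Illustrative random problem data (assumptions, not from the paper).
A = jax.random.normal(jax.random.PRNGKey(0), (20, 10))
b = jax.random.normal(jax.random.PRNGKey(1), (20,))
alpha = 1.0 / jnp.linalg.norm(A, 2) ** 2  # step size 1/L for the smooth part

# Forward-mode AD through the iterations: d x_K / d theta, which by the
# abstract's convergence result approximates the derivative of the
# solution mapping as num_iters grows.
dx_dtheta = jax.jacfwd(ista_iterates)(0.1, A, b, alpha)
print(dx_dtheta.shape)  # (10,)
```

Reverse-mode AD (`jax.jacrev`) through the same loop would store all iterates, which is the memory overhead the abstract's FPAD variant is designed to avoid.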