Title
Accelerated projected gradient algorithms for sparsity constrained optimization problems
Authors
Abstract
We consider the projected gradient algorithm for the nonconvex best subset selection problem that minimizes a given empirical loss function under an $\ell_0$-norm constraint. By decomposing the feasible set of the given sparsity constraint into a finite union of linear subspaces, we present two acceleration schemes with global convergence guarantees, one by same-space extrapolation and the other by subspace identification. The former fully utilizes the problem structure to greatly accelerate the optimization speed at only negligible additional cost. The latter leads to a two-stage meta-algorithm that first uses classical projected gradient iterations to identify the correct subspace containing an optimal solution, and then switches to a highly efficient smooth optimization method in the identified subspace to attain superlinear convergence. Experiments demonstrate that the proposed accelerated algorithms are orders of magnitude faster than their non-accelerated counterparts as well as the state of the art.
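To make the setting concrete, the following is a minimal sketch of the baseline method the abstract builds on: projected gradient descent under an $\ell_0$-norm constraint, where the projection onto $\{x : \|x\|_0 \le s\}$ is hard thresholding (keep the $s$ largest-magnitude entries). The extrapolation rule shown is a hypothetical illustration of the "same-space" idea (momentum applied only while the support, i.e. the linear subspace, is unchanged); the paper's actual scheme, step-size rules, and safeguards are not reproduced here.

```python
import numpy as np

def project_l0(x, s):
    """Project x onto {z : ||z||_0 <= s} by hard thresholding:
    keep the s largest-magnitude entries, zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def projected_gradient(grad, x0, s, step, iters=500, beta=0.5):
    """Projected gradient with a simple same-space extrapolation heuristic
    (hypothetical sketch): the momentum step is taken only when the support
    did not change, so the extrapolated point stays in the same subspace."""
    x_prev = x0.copy()
    x = project_l0(x0 - step * grad(x0), s)
    for _ in range(iters):
        if np.array_equal(x != 0, x_prev != 0):
            y = x + beta * (x - x_prev)  # extrapolate within the subspace
        else:
            y = x  # support changed: fall back to a plain projected step
        x_prev, x = x, project_l0(y - step * grad(y), s)
    return x
```

For the separable quadratic loss $\frac{1}{2}\|x - b\|^2$ (gradient $x - b$) with unit step size, the iteration converges to the hard thresholding of $b$, the best $s$-sparse approximation in this case.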