Paper Title
The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
Paper Authors
Paper Abstract
We present a new approach to solve the sparse approximation or best subset selection problem, namely find a $k$-sparse vector ${\bf x}\in\mathbb{R}^d$ that minimizes the $\ell_2$ residual $\lVert A{\bf x}-{\bf y} \rVert_2$. We consider a regularized approach, whereby this residual is penalized by the non-convex $\textit{trimmed lasso}$, defined as the $\ell_1$-norm of ${\bf x}$ excluding its $k$ largest-magnitude entries. We prove that the trimmed lasso has several appealing theoretical properties, and in particular derive sparse recovery guarantees assuming successful optimization of the penalized objective. Next, we show empirically that directly optimizing this objective can be quite challenging. Instead, we propose a surrogate for the trimmed lasso, called the $\textit{generalized soft-min}$. This penalty smoothly interpolates between the classical lasso and the trimmed lasso, while taking into account all possible $k$-sparse patterns. The generalized soft-min penalty involves summation over $\binom{d}{k}$ terms, yet we derive a polynomial-time algorithm to compute it. This, in turn, yields a practical method for the original sparse approximation problem. Via simulations, we demonstrate its competitive performance compared to current state of the art.
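To make the key definition concrete: writing $\lvert x \rvert_{(1)} \ge \lvert x \rvert_{(2)} \ge \cdots \ge \lvert x \rvert_{(d)}$ for the sorted magnitudes of the entries of ${\bf x}$, the trimmed lasso penalty described in the abstract is $\sum_{i=k+1}^{d} \lvert x \rvert_{(i)}$, which vanishes exactly when ${\bf x}$ is $k$-sparse. Below is a minimal NumPy sketch of this penalty; it is an illustration based on the abstract's definition, not the authors' implementation, and the function name `trimmed_lasso` is ours.

```python
import numpy as np

def trimmed_lasso(x: np.ndarray, k: int) -> float:
    """Trimmed lasso penalty: the l1-norm of x excluding its k
    largest-magnitude entries. Zero if and only if x is k-sparse."""
    mags = np.sort(np.abs(x))                       # magnitudes, ascending
    return float(mags[: max(x.size - k, 0)].sum())  # drop the k largest

# Example: with k = 2, only the two smallest magnitudes contribute.
x = np.array([5.0, -0.1, 3.0, 0.2])
print(trimmed_lasso(x, k=2))  # ~0.3 = |-0.1| + |0.2|
```

A regularized objective in the spirit of the abstract would then combine the residual $\lVert A{\bf x}-{\bf y} \rVert_2$ with $\lambda \cdot$ `trimmed_lasso(x, k)` for some multiplier $\lambda > 0$; the precise form of the penalized objective used by the authors is given in the paper itself.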