Paper Title

Sparse Bayesian Lasso via a Variable-Coefficient $\ell_1$ Penalty

Paper Authors

Nathan Wycoff, Ali Arab, Katharine M. Donato, Lisa O. Singh

Paper Abstract

Modern statistical learning algorithms are capable of amazing flexibility, but struggle with interpretability. One possible solution is sparsity: performing inference such that many of the parameters are estimated as identically 0, which may be imposed through the use of nonsmooth penalties such as the $\ell_1$ penalty. However, the $\ell_1$ penalty introduces significant bias when high sparsity is desired. In this article, we retain the $\ell_1$ penalty, but define learnable penalty weights $\lambda_p$ endowed with hyperpriors. We begin the article by investigating the optimization problem this poses, developing a proximal operator associated with the $\ell_1$ norm. We then study the theoretical properties of this variable-coefficient $\ell_1$ penalty in the context of penalized likelihood. Next, we investigate the application of this penalty to Variational Bayes, developing a model we call the Sparse Bayesian Lasso, which allows for behavior qualitatively like Lasso regression to be applied to arbitrary variational models. In simulation studies, this gives us the uncertainty quantification and low-bias properties of simulation-based approaches with an order of magnitude less computation. Finally, we apply our methodology to a Bayesian lagged spatiotemporal regression model of internal displacement that occurred during the Iraqi Civil War of 2013-2017.
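To give a concrete sense of the building block the abstract refers to, the proximal operator of a weighted $\ell_1$ penalty $\sum_p \lambda_p |x_p|$ is coordinatewise soft-thresholding with per-coordinate thresholds. This is a minimal sketch of that standard operator with fixed weights $\lambda_p$ (the function name `prox_weighted_l1` is illustrative); the paper's contribution of treating the $\lambda_p$ as learnable with hyperpriors goes beyond this:

```python
import numpy as np

def prox_weighted_l1(x, lam, step=1.0):
    """Proximal operator of the weighted l1 penalty sum_p lam_p * |x_p|.

    Solves argmin_z (1 / (2 * step)) * ||z - x||^2 + sum_p lam_p * |z_p|,
    which decomposes coordinatewise into soft-thresholding with
    per-coordinate thresholds step * lam_p.
    """
    thresh = step * np.asarray(lam)
    # Shrink each coordinate toward 0 by its own threshold; coordinates
    # with |x_p| <= thresh_p are set exactly to 0, producing sparsity.
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

x = np.array([2.0, -0.5, 1.2, -3.0])
lam = np.array([1.0, 1.0, 2.0, 0.5])  # variable penalty weights lambda_p
print(prox_weighted_l1(x, lam))
```

Larger $\lambda_p$ thresholds more coordinates to exactly 0 (here the middle two), which is the mechanism by which an $\ell_1$ penalty induces sparsity.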
