Title
A Unified Algorithm for Penalized Convolution Smoothed Quantile Regression
Authors
Abstract
Penalized quantile regression (QR) is widely used for studying the relationship between a response variable and a set of predictors under data heterogeneity in high-dimensional settings. Compared to penalized least squares, scalable algorithms for fitting penalized QR are lacking due to the non-differentiable piecewise linear loss function. To overcome the lack of smoothness, a recently proposed convolution-type smoothing method strikes a tradeoff between statistical accuracy and computational efficiency for both standard and penalized quantile regressions. In this paper, we propose a unified algorithm for fitting penalized convolution smoothed quantile regression with various commonly used convex penalties, accompanied by an R package, conquer, available from the Comprehensive R Archive Network. We perform extensive numerical studies to demonstrate the superior performance of the proposed algorithm over existing methods in both statistical and computational aspects. We further exemplify the proposed algorithm by fitting a fused lasso additive QR model to the world happiness data.
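As a rough illustration of the convolution-smoothing idea mentioned above (a standalone sketch, not the paper's unified algorithm), convolving the check loss ρ_τ(u) = u(τ − 1{u < 0}) with a Gaussian kernel of bandwidth h yields the smooth surrogate ℓ_h(u) = u(τ − 1 + Φ(u/h)) + h·φ(u/h), where Φ and φ are the standard normal CDF and PDF. The function names below are illustrative; only the Python standard library is used:

```python
import math

def check_loss(u, tau):
    """Standard quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def smoothed_loss(u, tau, h):
    """Gaussian-kernel convolution-smoothed check loss, closed form:
    ell_h(u) = u * (tau - 1 + Phi(u/h)) + h * phi(u/h).
    Smooth in u; recovers rho_tau(u) as h -> 0."""
    z = u / h
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return u * (tau - 1.0 + Phi) + h * phi

def smoothed_loss_numeric(u, tau, h, n=20001, width=10.0):
    """Sanity check: direct numerical convolution (rho_tau * K_h)(u)
    by a Riemann sum over [u - width*h, u + width*h]."""
    lo = u - width * h
    dv = 2.0 * width * h / (n - 1)
    total = 0.0
    for i in range(n):
        v = lo + i * dv
        kern = math.exp(-0.5 * ((u - v) / h) ** 2) / (h * math.sqrt(2.0 * math.pi))
        total += check_loss(v, tau) * kern * dv
    return total
```

Because ℓ_h is differentiable everywhere, gradient-based solvers become applicable where the subgradient methods required by the raw check loss would struggle, which is the computational motivation behind the smoothing.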