A Unified Algorithm for Penalized Convolution Smoothed Quantile Regression


Bibliographic Details
Published in: Journal of Computational and Graphical Statistics, Vol. 33, No. 2, pp. 625-637
Main Authors: Man, Rebeka; Pan, Xiaoou; Tan, Kean Ming; Zhou, Wen-Xin
Format: Journal Article
Language: English
Published: Alexandria: Taylor & Francis, 02.04.2024
ISSN: 1061-8600, 1537-2715
Description
Summary: Penalized quantile regression (QR) is widely used for studying the relationship between a response variable and a set of predictors under data heterogeneity in high-dimensional settings. Compared to penalized least squares, scalable algorithms for fitting penalized QR are lacking due to the non-differentiable piecewise linear loss function. To overcome the lack of smoothness, a recently proposed convolution-type smoothed method brings an interesting tradeoff between statistical accuracy and computational efficiency for both standard and penalized quantile regressions. In this article, we propose a unified algorithm for fitting penalized convolution smoothed quantile regression with various commonly used convex penalties, accompanied by an R-language package conquer available from the Comprehensive R Archive Network. We perform extensive numerical studies to demonstrate the superior performance of the proposed algorithm over existing methods in both statistical and computational aspects. We further exemplify the proposed algorithm by fitting a fused lasso additive QR model on the world happiness data.
DOI: 10.1080/10618600.2023.2275999