Automatic bias correction for testing in high‐dimensional linear models

Bibliographic Details
Published in: Statistica Neerlandica, Vol. 77, no. 1, pp. 71-98
Main Authors: Zhou, Jing; Claeskens, Gerda
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.02.2023
ISSN: 0039-0402, 1467-9574
Description
Summary: Hypothesis testing is challenging because the test statistic has a complicated asymptotic distribution when it is based on a regularized estimator in high dimensions. We propose a robust testing framework for ℓ1‐regularized M‐estimators to cope with non‐Gaussian distributed regression errors, using the robust approximate message passing algorithm. The proposed framework enjoys an automatically built‐in bias correction and is applicable to general convex, nondifferentiable loss functions, which also allows inference when the focus is a conditional quantile rather than the mean of the response. The estimator compares numerically well with the debiased and desparsified approaches when the least squares loss function is used. With the Huber loss function, the proposed construction provides stable confidence intervals under different regression error distributions.
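
To make the setting concrete, below is a minimal illustrative sketch of an ℓ1‐penalized Huber M‐estimator fitted by proximal gradient descent (ISTA) in Python. It is not the paper's robust approximate message passing construction and contains no bias correction; the function names and tuning values (l1_huber, lam, delta) are hypothetical choices for this example only.

# Illustrative sketch only: an l1-penalized Huber M-estimator via ISTA.
# This is NOT the article's robust AMP-based testing framework; it merely
# shows the kind of regularized M-estimator the abstract refers to.
import numpy as np

def huber_grad(r, delta=1.345):
    """Derivative of the Huber loss (score function) applied to residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(z, t):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_huber(X, y, lam=0.1, delta=1.345, n_iter=500):
    """l1-regularized Huber regression fitted by proximal gradient descent."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1/L for the smooth part
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ huber_grad(r, delta) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy usage with heavy-tailed (t-distributed) regression errors
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_t(df=3, size=n)
beta_hat = l1_huber(X, y, lam=0.2)
print("largest fitted coefficients:", np.sort(np.abs(beta_hat))[-5:])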
Bibliography: Funding information
Fonds Wetenschappelijk Onderzoek Junior Postdoc Fellowship, KU Leuven Research Fund, Grant/Award Number: C16/20/002
DOI: 10.1111/stan.12274