Splitting strategies for post-selection inference

Bibliographic Details
Published in: Biometrika, Vol. 110, No. 3, pp. 597-614
Main Authors: García Rasines, D., Young, G. A.
Format: Journal Article
Language: English
Published: Oxford University Press, 01.09.2023
ISSN: 0006-3444, 1464-3510
Online Access:Get full text
Tags: Add Tag
No Tags, Be the first to tag this record!
Description
Summary: We consider the problem of providing valid inference for a selected parameter in a sparse regression setting. It is well known that classical regression tools can be unreliable in this context because of the bias generated in the selection step. Many approaches have been proposed in recent years to ensure inferential validity. In this article we consider a simple alternative to data splitting based on randomizing the response vector, which allows for higher selection and inferential power than the former, and is applicable with an arbitrary selection rule. We perform a theoretical and empirical comparison of the two methods and derive a central limit theorem for the randomization approach. Our investigations show that the gain in power can be substantial.
DOI: 10.1093/biomet/asac070
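
For readers who want a concrete picture of the randomization strategy described in the summary, the sketch below illustrates one standard form of response randomization in Python: the observed response is perturbed with independent Gaussian noise, the perturbed copy is used for model selection, and a second, independent copy is used for inference. This is a minimal illustration, not the article's exact procedure: the Gaussian errors with known variance sigma, the tuning value gamma = 1, the simulated data, and the use of the lasso as the (arbitrary) selection rule are all assumptions made here for the example.

```python
import numpy as np
import scipy.stats as st
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated sparse regression data (illustrative: n, p and sigma chosen here).
n, p, sigma = 200, 50, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 2.0
y = X @ beta + sigma * rng.standard_normal(n)

# Randomize the response: with w ~ N(0, sigma^2 I) drawn independently of the data,
# u = y + gamma*w and v = y - w/gamma are independent Gaussian vectors.
gamma = 1.0
w = sigma * rng.standard_normal(n)
u = y + gamma * w       # used only for selection
v = y - w / gamma       # used only for inference

# Selection step on u (any selection rule could be used; the lasso is illustrative).
selected = np.flatnonzero(Lasso(alpha=0.1).fit(X, u).coef_ != 0)

# Inference step on v: least squares on the selected columns.
# Var(v) = sigma^2 * (1 + 1/gamma^2), so standard errors are scaled accordingly.
Xs = X[:, selected]
coef, *_ = np.linalg.lstsq(Xs, v, rcond=None)
cov = sigma**2 * (1 + 1 / gamma**2) * np.linalg.inv(Xs.T @ Xs)
se = np.sqrt(np.diag(cov))
z = st.norm.ppf(0.975)
for j, b, s in zip(selected, coef, se):
    print(f"beta_{j}: {b:.3f}  95% CI: ({b - z * s:.3f}, {b + z * s:.3f})")
```

Because w is independent of the errors, the selection carried out on u does not bias the inference carried out on v. The parameter gamma governs the trade-off the abstract alludes to: small gamma keeps u close to y (better selection) but inflates the variance of v (weaker inference), and large gamma does the reverse.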