Distributed algorithm for best subset regression

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 277, p. 127224
Main Authors: Ming, Hao; Yang, Hu
Format: Journal Article
Language:English
Published: Elsevier Ltd, 05.06.2025
ISSN:0957-4174
Description
Summary: High-dimensional massive data modeling faces critical challenges in computational efficiency, memory constraints, and privacy protection. We develop a distributed framework for best subset regression with convex twice-differentiable losses (e.g., linear, multiplicative, and logistic regression). The proposed distributed enhanced primal–dual active set (DEPDAS) algorithm employs enhanced distributed computing to efficiently approximate optimal solutions in low-dimensional parameter spaces. Under standard regularity conditions, DEPDAS preserves the statistical properties of the full-sample-based EPDAS algorithm, including optimal estimation error rates and Oracle properties. With a per-iteration communication cost of O(2T+2p) for DEPDAS, our master-machine initialization strategy accelerates convergence while reducing communication overhead. Furthermore, we derive a lower communication DEPDAS (LCDEPDAS) variant with O(4T) per-iteration cost. Extensive simulations and empirical studies demonstrate the superiority of both algorithms over state-of-the-art methods in estimation accuracy and prediction performance.
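The abstract builds on the primal-dual active set idea for best subset selection. As a rough single-machine illustration of that idea in the linear-regression case, the sketch below alternates between picking the T coordinates with the largest combined primal-plus-dual magnitude and refitting least squares on that active set. The function name, stopping rule, and synthetic setup are assumptions for illustration only; this is not the authors' DEPDAS or its distributed variant.

```python
import numpy as np

def pdas_best_subset(X, y, T, max_iter=50):
    """Minimal single-machine primal-dual active set sketch for best subset
    linear regression with sparsity level T. Illustrative only: the stopping
    rule and least-squares primal update are assumptions, not the paper's
    DEPDAS algorithm."""
    n, p = X.shape
    beta = np.zeros(p)
    d = X.T @ y / n                      # dual variable: scaled negative gradient at beta = 0
    active = None
    for _ in range(max_iter):
        # Active set: indices of the T largest |beta_j + d_j|
        new_active = np.sort(np.argsort(-np.abs(beta + d))[:T])
        if active is not None and np.array_equal(new_active, active):
            break                        # active set stabilized
        active = new_active
        # Primal update: least squares restricted to the active set
        beta = np.zeros(p)
        beta[active] = np.linalg.lstsq(X[:, active], y, rcond=None)[0]
        # Dual update: scaled gradient of the squared loss at the new beta
        d = X.T @ (y - X @ beta) / n
    return beta
```

In the distributed setting the abstract describes, each machine would contribute local gradient information, so per-iteration communication plausibly involves exchanging a T-sparse primal piece and a p-dimensional dual piece, which is the kind of accounting behind costs such as O(2T+2p); that reading is an inference from the abstract, not a statement of the paper's protocol.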
DOI:10.1016/j.eswa.2025.127224