Representative Learning for Distributed Learning with Heterogeneity and Asynchrony.

Saved in:
Bibliographic Details
Title: Representative Learning for Distributed Learning with Heterogeneity and Asynchrony.
Authors: Li, Keren1 (AUTHOR) li.keren.cn@gmail.com
Source: Journal of Computational & Graphical Statistics. Apr 2026, p1-21. 21p. 3 Illustrations.
Subject Terms: *DISTRIBUTED computing, *STATISTICS, FEDERATED learning, DISTRIBUTED artificial intelligence, OPTIMIZATION algorithms
Abstract: Representative Learning (RepL) is a distributed learning framework in which nodes transmit pseudo data, called representatives, instead of model parameters or gradients. These representatives retain the original data format while encoding key statistical features, enabling them to support asynchronous communication and heterogeneous tasks. This paper introduces two new representative constructions: the Transformed Mean Representative (TMR), which generalizes the mean representative by incorporating model-specific link functions; and the Anchored Score-Matching Representative (Anchored-SMR), which modifies score-matching equations to ensure uniqueness and stability. Anchored-SMR is further extended to accommodate general smooth loss functions with optional non-smooth penalties. We analyze RepL in decentralized, asynchronous systems where gradients and models from other nodes may be delayed or misaligned. Theoretical results and extensive simulations demonstrate that the proposed representatives maintain accuracy and convergence under heterogeneity and asynchrony, offering a scalable and interpretable alternative to gradient-based distributed optimization. [ABSTRACT FROM AUTHOR]
Copyright of Journal of Computational & Graphical Statistics is the property of Taylor & Francis Ltd and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Business Source Index
ISSN:10618600
DOI:10.1080/10618600.2026.2652975