Hierarchical Federated ADMM

Bibliographic Details
Title: Hierarchical Federated ADMM
Authors: Azimi Abarghouyi, Seyed Mohammad; Bastianello, Nicola; Johansson, Karl H.; Fodor, Viktória
Source: IEEE NETWORKING LETTERS. 7(1):11-15
Subject Terms: Servers, Convex functions, Optimization, Linear programming, Privacy, Vectors, Training, Federated learning, Computational modeling, Accuracy, Machine learning, distributed optimization, ADMM, hierarchical networks
Description: In this letter, we depart from the widely used gradient descent-based hierarchical federated learning (FL) algorithms and develop a novel hierarchical FL framework based on the alternating direction method of multipliers (ADMM), leveraging a network architecture consisting of a single cloud server and multiple edge servers, where each edge server serves a dedicated set of clients. Within this framework, we propose two novel FL algorithms, both of which use ADMM in the top layer: one employs ADMM in the lower layer as well, while the other uses the conventional gradient descent-based approach there. The proposed framework enhances privacy, and experiments demonstrate that the proposed algorithms outperform conventional algorithms in terms of learning convergence and accuracy. Additionally, gradient descent on the lower layer performs well even when the number of local steps is very limited, while ADMM on both layers leads to better performance otherwise.
File Description: print
Access URL: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-373461
https://doi.org/10.1109/lnet.2025.3527161
Database: SwePub
ISSN: 2576-3156
DOI: 10.1109/lnet.2025.3527161
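
The record gives only the high-level structure (ADMM consensus between edge servers and the cloud, and either ADMM or gradient descent between clients and their edge server), not the update equations. The sketch below is a minimal, hypothetical reading of the two-layer ADMM variant on synthetic least-squares clients; the penalty value, the edge-cloud coupling term, and all names are assumptions for illustration, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(0)
d = 5                       # model dimension (toy size)
N_EDGES, N_CLIENTS = 3, 4   # edge servers and clients per edge (toy sizes)
RHO = 1.0                   # ADMM penalty, reused at both layers (assumption)
ROUNDS, LOCAL = 30, 5       # cloud rounds and lower-layer ADMM rounds

# Synthetic least-squares clients, f_i(x) = 0.5*||A_i x - b_i||^2, chosen so
# the client update has a closed form.
x_true = rng.normal(size=d)
def make_client():
    A = rng.normal(size=(20, d))
    return A, A @ x_true + 0.1 * rng.normal(size=20)
data = [[make_client() for _ in range(N_CLIENTS)] for _ in range(N_EDGES)]

z_cloud = np.zeros(d)                                   # cloud model
z_edge = [np.zeros(d) for _ in range(N_EDGES)]          # edge models
u_edge = [np.zeros(d) for _ in range(N_EDGES)]          # edge-cloud duals
u_cli = [[np.zeros(d) for _ in range(N_CLIENTS)] for _ in range(N_EDGES)]
x_cli = [[np.zeros(d) for _ in range(N_CLIENTS)] for _ in range(N_EDGES)]

for _ in range(ROUNDS):
    for e in range(N_EDGES):
        for _ in range(LOCAL):  # lower-layer ADMM between clients and edge e
            for i, (A, b) in enumerate(data[e]):
                # Client update (closed form):
                # argmin_x 0.5*||Ax - b||^2 + (RHO/2)*||x - z_e + u_i||^2
                x_cli[e][i] = np.linalg.solve(
                    A.T @ A + RHO * np.eye(d),
                    A.T @ b + RHO * (z_edge[e] - u_cli[e][i]))
            # Edge update balancing its clients against the cloud model
            # (assumed coupling): argmin_z sum_i (RHO/2)*||x_i - z + u_i||^2
            #                              + (RHO/2)*||z - z_cloud + u_e||^2
            z_edge[e] = (sum(x_cli[e][i] + u_cli[e][i]
                             for i in range(N_CLIENTS))
                         + z_cloud - u_edge[e]) / (N_CLIENTS + 1)
            for i in range(N_CLIENTS):
                u_cli[e][i] += x_cli[e][i] - z_edge[e]  # client dual update
    # Top-layer ADMM: cloud averages dual-adjusted edge models
    z_cloud = np.mean([z_edge[e] + u_edge[e] for e in range(N_EDGES)], axis=0)
    for e in range(N_EDGES):
        u_edge[e] += z_edge[e] - z_cloud                # edge dual update

print("distance to ground truth:", np.linalg.norm(z_cloud - x_true))

Under the same reading, the letter's second algorithm would replace the closed-form client solve with a few local gradient steps on f_i(x) + (RHO/2)*||x - z_e + u_i||^2, leaving the edge and cloud updates unchanged. In both variants only models and duals are exchanged, never raw client data, which is consistent with the privacy benefit the abstract claims.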