Clustered Federated Multi-Task Learning: A Communication-and-Computation Efficient Sparse Sharing Approach

Detailed Bibliography
Published in: IEEE Transactions on Wireless Communications, Vol. 24, No. 6, pp. 4824-4838
Main Authors: Ai, Yuhan; Chen, Qimei; Zhu, Guangxu; Wen, Dingzhu; Jiang, Hao; Zeng, Jun; Li, Ming
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2025
ISSN: 1536-1276, 1558-2248
Description
Summary: Federated multi-task learning (FMTL) is a promising technology for tackling one of the most severe challenges in federated learning (FL), namely non-independent and identically distributed (non-IID) data: it treats each client as a single task and learns personalized models by exploiting task correlations. However, transmitting individual task models generally incurs significant communication overhead compared with broadcasting a global model. Furthermore, related works mainly focus on FMTL with predefined, static relationships among tasks, which ignores the non-IID data characteristic. To address these issues, we propose a novel clustered FMTL mechanism via sparse sharing (FedSS). Specifically, we introduce an iterative model pruning approach that trains customized client models to deal with the non-IID issue. We then divide clients into different tasks according to their model similarities to promote communication efficiency. Based on the clustered tasks, we introduce a sparse sharing mechanism that allows clients to dynamically share model parameters among different tasks to further boost training performance. On the other hand, scarce communication resources would degrade FMTL performance by restricting personalized model transmissions. Hence, we first theoretically analyze the convergence of the proposed FedSS, which quantitatively unveils the relationship between local model training performance and communication resources. We then formulate a communication-and-computation-efficient optimization problem via a joint sparsity-ratio assignment and bandwidth-allocation strategy. Closed-form expressions for the optimal sparsity ratio and bandwidth allocation are derived based on Lyapunov optimization and block coordinate update (BCU) algorithms. Numerical results illustrate that the proposed FedSS outperforms the benchmarks and achieves efficient communication and computation performance.
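The pruning-then-clustering pipeline summarized above (iterative pruning of client models, then grouping clients into tasks by model similarity) can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's algorithm: the function names, the greedy Jaccard-similarity clustering, and all parameter values are assumptions made purely for exposition; FedSS additionally retrains between pruning steps and shares parameters across the resulting clusters.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of largest-magnitude weights."""
    k = int(round((1.0 - sparsity) * weights.size))
    mask = np.zeros(weights.size, dtype=bool)
    if k > 0:
        mask[np.argsort(np.abs(weights))[-k:]] = True
    return mask

def iterative_prune(weights, target_sparsity, steps=4):
    """Raise sparsity gradually; a real FMTL client would retrain between steps."""
    mask = np.ones(weights.size, dtype=bool)
    for s in range(1, steps + 1):
        mask &= magnitude_prune(weights * mask, target_sparsity * s / steps)
    return mask

def jaccard(a, b):
    """Similarity of two binary masks: |intersection| / |union|."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def cluster_by_similarity(masks, threshold=0.5):
    """Greedy clustering: join the first cluster whose representative mask is
    similar enough, otherwise start a new cluster (an illustrative stand-in
    for the paper's similarity-based task division)."""
    clusters = []  # list of (representative_mask, member_indices)
    for i, m in enumerate(masks):
        for rep, members in clusters:
            if jaccard(rep, m) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((m, [i]))
    return [members for _, members in clusters]
```

As a usage sketch, pruning noisy copies of two distinct base weight vectors and clustering the resulting masks recovers the two underlying groups, mirroring how clients with correlated data end up in the same task.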
DOI: 10.1109/TWC.2025.3544318