Semi-Asynchronous Model Design for Federated Learning in Mobile Edge Networks
| Published in: | IEEE Transactions on Vehicular Technology, Vol. 72, No. 12, pp. 1 - 14 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01.12.2023 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | |
| ISSN: | 0018-9545, 1939-9359 |
| Online access: | Full text |
| Abstract: | Federated learning (FL) is a distributed machine learning (ML) paradigm. Distributed clients train locally and only need to upload their model parameters to collaboratively learn a global model under the coordination of an aggregation server. Although client privacy is protected, multiple rounds of parameter uploads between the clients and the server are required to ensure the accuracy of the global model. Inevitably, this results in latency and energy consumption issues due to limited communication resources. Therefore, mobile edge computing (MEC) has been proposed to address communication delay and energy consumption in federated learning. In this paper, we first analyze how to select the gradient values that help the global model converge quickly, and we establish a theoretical analysis of the relationship between the convergence rate and the gradient direction. To efficiently reduce the energy consumption of clients during training, while ensuring the local training accuracy and the convergence rate of the global model, we adopt the deep deterministic policy gradient (DDPG) algorithm, which adaptively allocates resources according to different clients' requests to minimize energy consumption. To improve flexibility and scalability, we propose a new semi-asynchronous federated update model, which allows clients to aggregate asynchronously on the server and accelerates the convergence of the global model. Empirical results show that the proposed Algorithm 1 not only accelerates the convergence of the global model but also reduces the size of the parameters that need to be uploaded. In addition, the proposed Algorithm 2 reduces the time differences caused by client heterogeneity. Finally, the semi-asynchronous update model outperforms the synchronous update model in terms of communication time. |
|---|---|
| DOI: | 10.1109/TVT.2023.3298787 |
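
The semi-asynchronous update described in the abstract can be pictured with a small, self-contained sketch. This is not the paper's Algorithm 1 or Algorithm 2: the buffer size, the staleness-based weighting, and the toy local objective below are illustrative assumptions. The sketch only shows the core idea that the server aggregates whichever client updates have arrived once a small buffer fills, instead of waiting for every client as a fully synchronous server would, so slow (heterogeneous) clients do not stall each round.

```python
# Toy semi-asynchronous FL aggregation (illustrative sketch, not the paper's
# method). Assumptions: BUFFER_SIZE, the staleness weighting, and the toy
# "move toward an all-ones optimum" local objective are invented for clarity.
import numpy as np

DIM = 10          # model dimension (toy)
NUM_CLIENTS = 8
BUFFER_SIZE = 3   # aggregate as soon as this many client updates have arrived

def local_update(model, rng):
    # Toy local training: one noisy gradient step toward an all-ones optimum.
    grad = model - np.ones(DIM)
    return model - 0.5 * grad + 0.01 * rng.standard_normal(DIM)

def staleness_weight(current_round, based_on_round):
    # Updates computed from an older global model get a smaller weight.
    return 1.0 / (1.0 + current_round - based_on_round)

rng = np.random.default_rng(0)
global_model = np.zeros(DIM)
global_round = 0

# Each client holds the global model it last pulled and the round it came from.
client_model = [global_model.copy() for _ in range(NUM_CLIENTS)]
client_round = [0] * NUM_CLIENTS
buffer = []  # (update, round_the_update_was_based_on) pairs awaiting aggregation

for _ in range(200):
    # A random client finishes local training (clients finish at different times).
    c = int(rng.integers(NUM_CLIENTS))
    buffer.append((local_update(client_model[c], rng), client_round[c]))

    # The client immediately pulls the current global model for its next round.
    client_model[c] = global_model.copy()
    client_round[c] = global_round

    # Semi-asynchronous rule: aggregate once BUFFER_SIZE updates are buffered,
    # without waiting for all NUM_CLIENTS clients.
    if len(buffer) >= BUFFER_SIZE:
        weights = np.array([staleness_weight(global_round, r) for _, r in buffer])
        weights /= weights.sum()
        global_model = sum(w * u for w, (u, _) in zip(weights, buffer))
        global_round += 1
        buffer.clear()

print("rounds:", global_round,
      "distance to optimum:", float(np.linalg.norm(global_model - np.ones(DIM))))
```

The staleness weighting is one common way to keep late-arriving updates from dragging the global model backward; the paper's own weighting and client-selection rules (based on gradient direction and DDPG-driven resource allocation) are not reproduced here.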