Convergence Analysis and Latency Minimization for Retransmission-Based Semi-Federated Learning

Published in: IEEE Global Communications Conference (Online), pp. 2057-2062
Main authors: Zheng, Jingheng; Ni, Wanli; Tian, Hui; Jiang, Wenchao; Quek, Tony Q. S.
Format: Conference paper
Language: English
Published: IEEE, 04.12.2023
ISSN: 2576-6813
Description
Summary: In this paper, we propose a semi-federated learning (SemiFL) framework to improve the performance of conventional federated learning. The base station and devices are coordinated to collaboratively train a shared model. However, due to rapidly fluctuating channels and poorly assigned local learning workloads, SemiFL incurs excessive latency. To overcome these challenges, we propose a retransmission-based over-the-air computation mechanism to facilitate model aggregation and data mixing over quasi-static channels. The closed-form probability of successful aggregation is derived, and the communication latency is modeled using the Pascal distribution. Further, we establish an optimality gap to characterize the convergence performance of SemiFL, from which the minimum number of iterations needed to attain a given local target accuracy is identified. Next, a joint resource allocation and local target accuracy assignment problem is formulated to minimize the latency of each round, subject to constraints on the decay rate, central processing unit (CPU) frequency, and transmit power. To address this non-convex problem, we develop an algorithm that uses closed-form solutions for the normalizing factors and CPU frequencies. Simulation results on two real-world datasets confirm the superiority of SemiFL over benchmarks in terms of latency and learning performance.
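The abstract models communication latency via the Pascal (negative binomial) distribution: if each over-the-air transmission succeeds independently with probability p, the number of transmissions needed to accumulate k successful aggregations follows a Pascal distribution with mean k/p. The sketch below is only an illustration of that distributional idea, not the paper's closed-form derivation; the success probability, required success count, and slot duration are hypothetical parameters.

```python
import random

def expected_transmissions(p_success, k_successes):
    # Mean of a Pascal (negative binomial) distribution: expected number
    # of independent trials needed to collect k_successes successes,
    # each trial succeeding with probability p_success.
    return k_successes / p_success

def simulate_latency(p_success, k_successes, slot_time, n_trials=20000, seed=0):
    # Monte Carlo estimate of mean per-round communication latency:
    # retransmit every slot until k_successes aggregations succeed.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        transmissions = 0
        successes = 0
        while successes < k_successes:
            transmissions += 1
            if rng.random() < p_success:
                successes += 1
        total += transmissions * slot_time
    return total / n_trials

# Illustrative numbers (not from the paper): success probability 0.8,
# 5 required successful aggregations, 2 ms per transmission slot.
analytic_ms = expected_transmissions(0.8, 5) * 2.0
simulated_ms = simulate_latency(0.8, 5, 2.0)
```

With these assumed parameters the analytic mean latency is (5 / 0.8) * 2 ms = 12.5 ms, and the simulated value converges to it as the trial count grows.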
DOI: 10.1109/GLOBECOM54140.2023.10437598