Knowledge and Model-Driven Deep Reinforcement Learning for Federated Edge Learning


Detailed Bibliography
Published in: IEEE Global Communications Conference (Online), pp. 3926–3931
Main authors: Li, Yangchen; Zhao, Lingzhi; Yang, Feng
Format: Conference paper
Language: English
Published: IEEE, 08.12.2024
ISSN:2576-6813
Description
Summary: Federated edge learning (FEL) integrates federated learning (FL) into edge computing systems to improve communication efficiency and data privacy. We investigate a practical FEL system in which the computing and communication resources are dynamic and heterogeneous among workers, and the local data are non-independent and identically distributed (non-IID). We formulate a joint worker selection and FL algorithm parameter configuration problem to minimize the final test loss under time and energy constraints. This problem poses the challenges of an implicit objective, dimension-varying variables, and dynamic parameters. To tackle these issues, we transform the primal problem into a Markov decision process (MDP), using insights from FL algorithm convergence analysis, which enables the application of deep reinforcement learning (DRL) to capture system dynamics effectively. We propose a novel joint Knowledge/Model-Driven DRL (KMD-DRL) solution to address the challenges arising from the MDP problem, including mixed discrete-continuous actions and a large action space. Numerical results demonstrate the effectiveness and advantages of KMD-DRL in enhancing FEL efficiency.
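The MDP described in the summary pairs a discrete worker-selection decision with continuous FL parameter configuration in a single action. The following minimal Python sketch is not the paper's KMD-DRL method; the energy model, budget, and all function names are hypothetical assumptions, used only to illustrate what a mixed discrete-continuous action under a per-round energy constraint might look like:

```python
import random

# Illustrative sketch only (not the paper's KMD-DRL): one scheduling step
# for a hypothetical federated edge learning round. The energy costs,
# budget, and sampling distributions below are assumed for illustration.

def mixed_action(num_workers, rng):
    """Sample a mixed discrete-continuous action: a binary worker-selection
    mask (discrete part) plus a learning rate (continuous part)."""
    mask = [rng.random() < 0.5 for _ in range(num_workers)]
    lr = 10 ** rng.uniform(-4, -1)  # log-uniform learning rate in [1e-4, 1e-1]
    return mask, lr

def feasible(mask, energy, budget):
    """Check the per-round energy constraint over the selected workers."""
    return sum(e for e, m in zip(energy, mask) if m) <= budget

rng = random.Random(0)
energy = [rng.uniform(0.5, 2.0) for _ in range(8)]  # assumed per-worker cost

# Rejection-sample until the selected subset satisfies the energy budget.
mask, lr = mixed_action(8, rng)
while not feasible(mask, energy, budget=5.0):
    mask, lr = mixed_action(8, rng)
```

A learned policy would replace the random sampling here; the point of the sketch is only that each action carries both a variable-size discrete component (which workers participate) and a continuous component (how the FL algorithm is configured), which is the structure the KMD-DRL solution is designed to handle.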
DOI:10.1109/GLOBECOM52923.2024.10901314