Knowledge and Model-Driven Deep Reinforcement Learning for Federated Edge Learning

Detailed bibliography
Published in: IEEE Global Communications Conference (Online), pp. 3926-3931
Main authors: Li, Yangchen; Zhao, Lingzhi; Yang, Feng
Format: Conference paper
Language: English
Published: IEEE, 08.12.2024
ISSN:2576-6813
Description
Summary: Federated edge learning (FEL) integrates federated learning (FL) into edge computing systems to improve communication efficiency and data privacy. We investigate a practical FEL system, where the computing and communication resources are dynamic and heterogeneous among workers, and the local data are non-independent and identically distributed (non-IID). We formulate a joint worker selection and FL algorithm parameter configuration problem to minimize the final test loss under time and energy constraints. The corresponding problem poses challenges of implicit objective, dimension-varying variables, and dynamic parameters. To tackle these issues, we transform the primal problem into a Markov decision process (MDP), using insights from FL algorithm convergence analysis, which enables the application of deep reinforcement learning (DRL) to capture system dynamics effectively. We propose a novel joint Knowledge/Model-Driven DRL (KMD-DRL) solution to address challenges arising from the MDP problem, including mixed discrete-continuous actions and a large action space. Numerical results demonstrate the effectiveness and advantages of KMD-DRL in enhancing FEL efficiency.
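The MDP view described in the abstract, where each FL round the agent observes dynamic worker resources and issues a mixed discrete-continuous action (a worker-selection mask plus FL parameters), can be illustrated with a toy environment. This is a minimal sketch under invented assumptions: the class name, state layout, toy loss dynamics, and reward shaping are all illustrative and are not taken from the paper.

```python
import random

class FELEnvSketch:
    """Toy MDP for federated edge learning control (illustrative only).

    State: per-worker compute speed and channel quality, plus current loss.
    Action: binary worker-selection mask (discrete) + local-epoch count
    (the continuous/parameter part of the mixed action).
    Reward: toy training progress minus a time/energy-style cost.
    """

    def __init__(self, num_workers=10, seed=0):
        self.rng = random.Random(seed)
        self.num_workers = num_workers
        self.reset()

    def reset(self):
        # Heterogeneous, randomly drawn worker resources.
        self.speeds = [self.rng.uniform(0.5, 2.0) for _ in range(self.num_workers)]
        self.channels = [self.rng.uniform(0.1, 1.0) for _ in range(self.num_workers)]
        self.loss = 2.0
        return self._state()

    def _state(self):
        return self.speeds + self.channels + [self.loss]

    def step(self, selection, local_epochs):
        k = sum(selection)  # number of selected workers (discrete action part)
        # Toy dynamics: more workers/epochs reduce loss with diminishing
        # returns, but incur a resource cost folded into the reward.
        progress = 0.05 * k * local_epochs / (1 + 0.1 * local_epochs)
        cost = 0.01 * k * local_epochs
        self.loss = max(0.0, self.loss - progress)
        # Channels drift between rounds, modeling dynamic resources.
        self.channels = [min(1.0, max(0.1, c + self.rng.uniform(-0.1, 0.1)))
                         for c in self.channels]
        return self._state(), progress - cost

env = FELEnvSketch()
env.reset()
s, r = env.step([1] * 5 + [0] * 5, local_epochs=2.0)
```

A DRL agent (such as the paper's KMD-DRL) would replace the hand-picked action above with a learned policy over this mixed action space.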
DOI:10.1109/GLOBECOM52923.2024.10901314