Knowledge and Model-Driven Deep Reinforcement Learning for Federated Edge Learning
| Published in: | IEEE Global Communications Conference (Online), pp. 3926-3931 |
|---|---|
| Main Authors: | , , |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 08.12.2024 |
| ISSN: | 2576-6813 |
| Summary: | Federated edge learning (FEL) integrates federated learning (FL) into edge computing systems to improve communication efficiency and data privacy. We investigate a practical FEL system in which the computing and communication resources are dynamic and heterogeneous across workers, and the local data are non-independent and identically distributed (non-IID). We formulate a joint worker selection and FL algorithm parameter configuration problem to minimize the final test loss under time and energy constraints. The problem poses the challenges of an implicit objective, dimension-varying variables, and dynamic parameters. To tackle these issues, we transform the primal problem into a Markov decision process (MDP), using insights from FL algorithm convergence analysis, which enables deep reinforcement learning (DRL) to capture the system dynamics effectively. We propose a novel joint Knowledge/Model-Driven DRL (KMD-DRL) solution to address the challenges arising from the MDP problem, including mixed discrete-continuous actions and a large action space. Numerical results demonstrate the effectiveness and advantages of KMD-DRL in enhancing FEL efficiency. |
|---|---|
| DOI: | 10.1109/GLOBECOM52923.2024.10901314 |
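The summary's MDP formulation can be illustrated with a toy environment: a per-worker dynamic state (compute speed, channel quality), a mixed discrete-continuous action (a binary worker-selection vector plus a continuous FL parameter such as a learning rate), and a reward that trades loss reduction against a time/energy cost. This is a minimal hypothetical sketch to show the interface only; the dynamics, reward shape, and all names below are stand-ins, not the paper's actual model.

```python
import random

class ToyFELEnv:
    """Hypothetical toy MDP for joint worker selection and FL parameter
    configuration (illustration only, not the KMD-DRL environment)."""

    def __init__(self, n_workers=4, seed=0):
        self.rng = random.Random(seed)
        self.n_workers = n_workers
        self.reset()

    def _sample_state(self):
        # Per-worker (compute speed, channel quality): dynamic and heterogeneous.
        return [(self.rng.uniform(0.5, 2.0), self.rng.uniform(0.1, 1.0))
                for _ in range(self.n_workers)]

    def reset(self):
        self.state = self._sample_state()
        self.loss = 1.0  # proxy for the global test loss
        return self.state

    def step(self, selection, lr):
        """selection: 0/1 list per worker (discrete part of the action);
        lr: local learning rate (continuous part of the action)."""
        k = sum(selection)
        # Proxy loss decrease: more selected workers and a moderate lr help.
        progress = 0.1 * (k / self.n_workers) * min(lr, 0.5) / 0.5
        self.loss = max(0.0, self.loss - progress)
        # Time/energy penalty grows with the number of selected workers.
        cost = 0.01 * k
        reward = -self.loss - cost
        self.state = self._sample_state()  # dynamics: state changes each round
        return self.state, reward

env = ToyFELEnv()
state = env.reset()
for _ in range(5):  # a random policy standing in for the DRL agent
    selection = [env.rng.randint(0, 1) for _ in range(env.n_workers)]
    lr = env.rng.uniform(0.01, 0.5)
    state, reward = env.step(selection, lr)
print(round(env.loss, 3))
```

A DRL agent for this MDP must emit both the discrete selection vector and the continuous parameter in one action, which is exactly the mixed discrete-continuous action space the summary identifies as a challenge.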