Dynamic Multi-user Computation Offloading for Mobile Edge Computing using Game Theory and Deep Reinforcement Learning
| Published in: | IEEE International Conference on Communications (2022), pp. 1930-1935 |
|---|---|
| Main Authors: | , |
| Format: | Conference Paper |
| Language: | English |
| Published: | IEEE, 16.05.2022 |
| Subjects: | |
| ISSN: | 1938-1883 |
| Online Access: | Full text |
| Abstract: | Mobile edge computing (MEC) has emerged as a promising solution to bridge the gap between increasingly computation-intensive applications and the limited computation capability of mobile devices by providing powerful computing services at the edge of the wireless access network. To use the services provided by MEC effectively, making efficient and reasonable offloading decisions is crucial. In this paper, we study the computation offloading of tasks from multiple users to a single-cell edge server in a dynamic environment. We consider a practical case wherein a group of mobile users with random mobility patterns uses a common set of time-varying stochastic transmission channels to perform computation offloading, and the number of active users in the system changes randomly. To reduce the mutual interference among users accessing the wireless channels, we adopt game theory to formulate the users' computation offloading decision process as a stochastic game. Next, we prove the existence of a Nash equilibrium (NE) for the proposed game by showing its equivalence to a weighted potential game, which has at least one pure-strategy NE. Then, we present distributed computation offloading algorithms that adopt a payoff-based multi-agent reinforcement learning (MARL) approach to reach the NE of the game. Finally, through simulation, we validate the effectiveness of the proposed algorithms by comparing them with previously studied multi-agent learning algorithms as well as conventional Q-learning and deep Q-learning algorithms. |
|---|---|
| DOI: | 10.1109/ICC45855.2022.9838691 |
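To make the game-theoretic structure described in the abstract concrete, the sketch below sets up a toy congestion game of the kind such offloading papers build on: each user either computes locally or offloads over one of a few shared channels whose cost grows with load, and each agent learns from its own realized cost alone via a simple epsilon-greedy, payoff-based rule. This is a minimal illustration under assumed parameters (the local cost, the linear load cost, the epsilon schedule, and the step size are all made up), not the authors' algorithm; in congestion games of this form, the weighted-potential property guarantees that a pure-strategy NE exists.

```python
import random
from collections import defaultdict

# Toy single-cell offloading game -- illustrative assumptions, NOT the
# paper's exact model: each of N_USERS users either computes locally
# (action 0) or offloads over one of N_CHANNELS shared channels
# (actions 1..N_CHANNELS). Offloading cost grows with the channel's load,
# giving the congestion-game (weighted potential game) structure that
# guarantees at least one pure-strategy Nash equilibrium (NE).
N_USERS, N_CHANNELS = 6, 2
LOCAL_COST = 3.0                        # assumed fixed local-computation cost
ACTIONS = list(range(N_CHANNELS + 1))   # 0 = local, 1..C = offload channels

def costs(profile):
    """Per-user cost of a joint action profile (list of actions)."""
    load = defaultdict(int)
    for a in profile:
        if a > 0:
            load[a] += 1
    return [LOCAL_COST if a == 0 else float(load[a]) for a in profile]

# Payoff-based learning: each user observes only its own realized cost,
# tracks an exponential-average cost estimate per action, and acts
# epsilon-greedily. A generic sketch of payoff-based multi-agent learning,
# not the authors' specific algorithm.
random.seed(0)
est = [[0.0] * len(ACTIONS) for _ in range(N_USERS)]

def choose(i, eps):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: est[i][a])   # lowest estimated cost

for t in range(20000):
    eps = max(0.02, 1.0 / (1.0 + t / 200.0))       # decaying exploration
    profile = [choose(i, eps) for i in range(N_USERS)]
    realized = costs(profile)
    for i, a in enumerate(profile):
        est[i][a] += 0.1 * (realized[i] - est[i][a])

def is_pure_ne(profile):
    """True iff no user can lower its cost by deviating unilaterally."""
    base = costs(profile)
    return all(
        costs(profile[:i] + [a] + profile[i + 1:])[i] >= base[i] - 1e-9
        for i in range(N_USERS) for a in ACTIONS
    )

final = [min(ACTIONS, key=lambda a: est[i][a]) for i in range(N_USERS)]
print("learned profile:", final, "per-user costs:", costs(final))
print("pure NE reached:", is_pure_ne(final))
```

Running this, the users typically spread themselves across the channels (or fall back to local compute) until no unilateral switch lowers anyone's cost, which is the pure-NE condition `is_pure_ne` verifies; the decaying epsilon mirrors the exploration-then-exploitation schedule common to payoff-based MARL rules.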