Dynamic task offloading for Internet of Things in mobile edge computing via deep reinforcement learning.

Saved in:
Bibliographic details
Title: Dynamic task offloading for Internet of Things in mobile edge computing via deep reinforcement learning.
Authors: Chen, Ying, Gu, Wei, Li, Kaixin
Source: International Journal of Communication Systems; 11/20/2025, Vol. 38 Issue 17, p1-16, 16p
Keywords: INTERNET of things, EDGE computing, ALGORITHMS, REINFORCEMENT learning, ELECTRIC power consumption, MARKOV processes
Abstract: With the development of the Internet of Things (IoT), IoT devices generate more and more computation‐intensive tasks. Because IoT devices are limited in battery and computing capacity, these tasks can be offloaded to mobile edge computing (MEC) servers and the cloud for processing. However, since channel states and the task generation process are dynamic, and the scale of the task offloading problem and its solution space grow rapidly, collaborative task offloading across MEC and cloud faces severe challenges. In this paper, we integrate two conflicting offloading goals: maximizing the ratio of tasks finished within a tolerable delay and minimizing the power consumption of devices. We formulate a task offloading problem that balances these two goals, and then reformulate it as an MDP‐based dynamic task offloading problem. We design a deep reinforcement learning (DRL)‐based dynamic task offloading (DDTO) algorithm to solve this problem. Our DDTO algorithm can adapt to the dynamic and complex environment and adjust its task offloading strategies accordingly. Experiments show that DDTO converges quickly, and the results validate its effectiveness in balancing finish ratio and power. [ABSTRACT FROM AUTHOR]
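The abstract's core idea — a learning agent that picks an offloading target (device, MEC, or cloud) to trade off task finish ratio against device power — can be illustrated with a minimal sketch. This is not the paper's DDTO algorithm: it is a hypothetical tabular Q-learning agent over a toy state space (channel quality × device load), and all numeric values (finish probabilities, power costs, reward weights) are invented for illustration only.

```python
import random

# Illustrative offloading targets and discretized states
# (channel quality x device queue load). All numbers are made up.
ACTIONS = ["local", "edge", "cloud"]
STATES = [(ch, q) for ch in ("good", "bad") for q in ("low", "high")]

def outcome(state, action):
    """Return a hypothetical (finish_probability, power_cost) pair."""
    ch, q = state
    if action == "local":
        # Local execution: no transmission, but high compute power draw,
        # and a loaded device is less likely to finish within the deadline.
        return (0.6 if q == "low" else 0.3), 1.0
    good = ch == "good"
    if action == "edge":
        # Edge: low transmission power; success depends on the channel.
        return (0.9 if good else 0.5), 0.4
    # Cloud: extra backhaul delay lowers the on-time finish probability.
    return (0.8 if good else 0.4), 0.5

def reward(finish_prob, power, w_finish=1.0, w_power=0.5):
    # Scalarize the two conflicting goals from the abstract:
    # reward finishing on time, penalize device power consumption.
    return w_finish * finish_prob - w_power * power

def train(episodes=2000, alpha=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for ep in range(episodes):
        s = rng.choice(STATES)                      # dynamic environment draw
        eps = max(0.05, 1.0 - ep / episodes)        # decaying exploration
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        p, pw = outcome(s, a)
        # One-step update: each task decision is treated independently here,
        # a simplification of the paper's full MDP formulation.
        Q[(s, a)] += alpha * (reward(p, pw) - Q[(s, a)])
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

Under these toy numbers the learned policy favors edge offloading, since it combines a high finish probability with low transmission power; changing the weights `w_finish`/`w_power` shifts the balance between the two goals, which is the trade-off the paper's DDTO algorithm learns in a dynamic setting.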
Copyright of International Journal of Communication Systems is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN: 1074-5351
DOI: 10.1002/dac.5154