Deep Reinforcement Learning techniques for dynamic task offloading in the 5G edge-cloud continuum

Bibliographic Details
Published in: Journal of Cloud Computing: Advances, Systems and Applications, Vol. 13, No. 1, Art. 94 (2024)
Main authors: Nieto, Gorka; de la Iglesia, Idoia; Lopez-Novoa, Unai; Perfecto, Cristina
Format: Journal Article
Language: English
Publication details: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.12.2024 (Springer Nature B.V.; SpringerOpen)
ISSN: 2192-113X
Online access: Get full text
Description
Summary: The integration of new Internet of Things (IoT) applications and services heavily relies on task offloading to external devices due to the constrained computing and battery resources of IoT devices. Up to now, the Cloud Computing (CC) paradigm has been a good approach for tasks where latency is not critical, but it falls short when latency matters, in which case Multi-access Edge Computing (MEC) can be of use. In this work, we propose a distributed Deep Reinforcement Learning (DRL) tool to optimize the binary task offloading decision, that is, the independent decision of where to execute each computing task, depending on many factors. The optimization goal in this work is to maximize the Quality-of-Experience (QoE) when performing tasks, defined as a metric related to the battery level of the User Equipment (UE), subject to satisfying the tasks' latency requirements. This distributed DRL approach, specifically an Actor-Critic (AC) algorithm running on each UE, is evaluated through the simulation of two distinct scenarios and outperforms the other analyzed baselines in terms of QoE values and/or energy consumption in dynamic environments, also demonstrating that decisions need to adapt to the environment's evolution.
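
As a rough illustration of the approach the abstract describes, the sketch below shows a generic per-UE Actor-Critic agent for a binary offloading decision (action 0 = execute locally, action 1 = offload to the MEC server). It is not the authors' implementation: the state features, the QoE-style reward, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    # Shared trunk with an actor head (offloading logits) and a critic head (V(s)).
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, 2)   # action 0 = run locally, 1 = offload to MEC
        self.critic = nn.Linear(hidden, 1)  # state-value estimate V(s)

    def forward(self, state):
        h = self.trunk(state)
        return self.actor(h), self.critic(h)

def update(model, opt, state, action, reward, next_state, done, gamma=0.99):
    # One TD(0) actor-critic update from a single observed transition.
    logits, value = model(state)
    with torch.no_grad():
        _, next_value = model(next_state)
        td_target = reward + gamma * next_value * (1.0 - done)
    advantage = td_target - value
    log_prob = torch.distributions.Categorical(logits=logits).log_prob(action)
    loss = -(log_prob * advantage.detach()) + advantage.pow(2)  # actor loss + critic loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Usage with dummy data: 4 assumed state features (e.g. battery level, channel
# quality, task size, task deadline); the scalar reward stands in for a QoE value
# that rewards battery savings and penalizes missed latency requirements.
model = ActorCritic(state_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s, s_next = torch.rand(4), torch.rand(4)
logits, _ = model(s)
a = torch.distributions.Categorical(logits=logits).sample()
update(model, opt, s, a, reward=torch.tensor(0.7),
       next_state=s_next, done=torch.tensor(0.0))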
DOI: 10.1186/s13677-024-00658-0