Joint Task Partitioning and Resource Allocation in RAV-Enabled Vehicular Edge Computing Based on Deep Reinforcement Learning


Detailed Description

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 12, No. 11, pp. 15453-15466
Main Authors: Liang, Hongbin; Zhang, Han; Ale, Laha; Hong, Xintao; Wang, Lei; Jia, Qiong; Zhao, Dongmei
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2025
ISSN: 2327-4662
Online Access: Full text
Description
Summary: Vehicle edge computing (VEC) leverages compact cloud computing at the mobile network edge to meet the processing and latency needs of vehicles. By bringing computation closer to the vehicles, VEC reduces data transmission, minimizes latency, and boosts performance for compute-intensive applications. However, during peak hours of urban road traffic, the scarce computational resources available at edge servers could pose challenges in fulfilling the processing needs of vehicles. Introducing remote aerial vehicles (RAVs) as supplementary edge computing nodes could significantly mitigate this issue. In this article, we propose a flexible edge computing framework in which a fleet of RAVs serves as mobile computational service providers, offering computation offloading services to multiple vehicles. We design and optimize a computation offloading model for the RAV-enabled VEC environment. The proposed model tackles the task offloading challenge, aiming to optimize RAV revenue and task processing efficiency while respecting the constraints of the RAVs' restricted computational power and energy resources. Toward this end, our model jointly considers two key factors: 1) task partitioning and 2) computational resource allocation. To tackle the challenges posed by this nonconvex optimization problem, we construct a Markov decision process (MDP) model for the multi-RAV-enabled mobile edge computing system and introduce a novel multiagent deep reinforcement learning (MADRL) framework that addresses the decision-making challenge represented by the MDP model. Comprehensive simulation results demonstrate that our task offloading technique outperforms other optimization methods.
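The abstract only sketches the joint task partitioning decision at a high level. As a rough illustration of the trade-off such a partitioning policy optimizes, the following toy Python sketch splits a task between a vehicle's local CPU and a RAV and grid-searches the partition fraction that minimizes completion time. All function names, parameters, and values here are illustrative assumptions, not the paper's actual model, which also accounts for RAV revenue, energy budgets, and multiagent learning.

```python
from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float       # task input size in bits (assumed parameter)
    cycles_per_bit: float  # CPU cycles required per bit (assumed parameter)

def completion_time(task: Task, split: float, rav_cpu_hz: float,
                    local_cpu_hz: float, uplink_bps: float) -> float:
    """Completion time when a fraction `split` of the task is offloaded.

    The local and offloaded parts run in parallel; the offloaded part
    pays an uplink transmission delay plus RAV processing time.
    """
    local_bits = (1.0 - split) * task.data_bits
    remote_bits = split * task.data_bits
    t_local = local_bits * task.cycles_per_bit / local_cpu_hz
    t_remote = (remote_bits / uplink_bps
                + remote_bits * task.cycles_per_bit / rav_cpu_hz)
    return max(t_local, t_remote)

def best_split(task: Task, rav_cpu_hz: float, local_cpu_hz: float,
               uplink_bps: float, grid: int = 101) -> float:
    """Grid-search the partition fraction minimizing completion time."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return min(candidates,
               key=lambda s: completion_time(task, s, rav_cpu_hz,
                                             local_cpu_hz, uplink_bps))
```

In the paper's setting this one-dimensional search is replaced by MADRL agents that learn partitioning and resource-allocation actions jointly under the MDP formulation.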
DOI: 10.1109/JIOT.2025.3527929