Priority-Based Scheduling and Orchestration in Edge-Cloud Computing: A Deep Reinforcement Learning-Enhanced Concurrency Control Approach

Bibliographic Details
Title: Priority-Based Scheduling and Orchestration in Edge-Cloud Computing: A Deep Reinforcement Learning-Enhanced Concurrency Control Approach
Authors: Mohammad A. Al Khaldy, Ahmad Nabot, Ahmad Al-Qerem, Mohammad Alauthman, Amina Salhi, Suhaila Abuowaida, Naceur Chihaoui
Source: Computer Modeling in Engineering & Sciences; ISSN: 1526-1492 (Print); ISSN: 1526-1506 (Online); Volume 145; Issue 1
Publisher: Tech Science Press
Publication Year: 2025
Keywords: Edge computing, cloud computing, scheduling algorithms, orchestration strategies, deep reinforcement learning, concurrency control, real-time systems, IoT
Description: The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
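Note: The abstract describes a two-level concurrency control protocol (local validation at the edge combined with global coordination in the cloud) but this record contains no implementation details. The Python sketch below is a minimal, hypothetical illustration of such a two-level optimistic validation flow under those assumptions; it is not the authors' code, and all class and function names (Transaction, EdgeValidator, CloudCoordinator, process) are invented for illustration. The DRL-assigned priority appears only as a transaction attribute.

# Illustrative sketch only -- not the paper's implementation.
# Assumes an optimistic, read/write-set style of validation; the paper may differ.
from dataclasses import dataclass, field


@dataclass
class Transaction:
    tid: int
    priority: float                         # e.g., assigned by a DRL policy
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)


class EdgeValidator:
    """Level 1: validate against transactions already committed on this edge node."""
    def __init__(self):
        self.committed_writes: set = set()

    def validate(self, txn: Transaction) -> bool:
        # Conflict if any item the transaction read was written by a committed txn.
        return self.committed_writes.isdisjoint(txn.read_set)

    def commit_local(self, txn: Transaction):
        self.committed_writes |= txn.write_set


class CloudCoordinator:
    """Level 2: global coordination of shared data items across edge nodes."""
    def __init__(self):
        self.global_locks: dict = {}        # data item -> owning transaction id

    def validate_and_commit(self, txn: Transaction) -> bool:
        if any(item in self.global_locks for item in txn.write_set):
            return False                    # global conflict: abort/restart at the edge
        for item in txn.write_set:
            self.global_locks[item] = txn.tid
        return True


def process(txn: Transaction, edge: EdgeValidator, cloud: CloudCoordinator) -> str:
    """Run local edge validation first, then global cloud validation, then commit."""
    if not edge.validate(txn):
        return "aborted (edge conflict)"
    if not cloud.validate_and_commit(txn):
        return "aborted (global conflict)"
    edge.commit_local(txn)
    return "committed"


if __name__ == "__main__":
    edge, cloud = EdgeValidator(), CloudCoordinator()
    t1 = Transaction(tid=1, priority=0.9, read_set={"sensor_a"}, write_set={"agg_a"})
    t2 = Transaction(tid=2, priority=0.4, read_set={"agg_a"}, write_set={"agg_a"})
    print(process(t1, edge, cloud))         # committed
    print(process(t2, edge, cloud))         # aborted (edge conflict)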
Publication Type: article in journal/newspaper
File Description: application/pdf
Language: English
Relation: https://doi.org/10.32604/cmes.2025.070004
DOI: 10.32604/cmes.2025.070004
Availability: https://doi.org/10.32604/cmes.2025.070004
Rights: info:eu-repo/semantics/openAccess ; https://creativecommons.org/licenses/by/4.0/
Document Code: edsbas.EFF974D0
Database: BASE