Towards sustainable smart cities: Workflow scheduling in cloud of health things (CoHT) using deep reinforcement learning and moth flame optimization for edge–cloud systems.

Detailed bibliography
Title: Towards sustainable smart cities: Workflow scheduling in cloud of health things (CoHT) using deep reinforcement learning and moth flame optimization for edge–cloud systems.
Authors: Khaleel, Mustafa Ibrahim
Source: Future Generation Computer Systems, Sep 2025, Vol. 170.
Subjects: deep reinforcement learning, reinforcement learning, smart cities, sustainable urban development, edge computing
Abstract: In smart cities, the Cloud of Health Things (CoHT) enhances service delivery and optimizes task scheduling and allocation. As CoHT systems proliferate and offer a range of services with varying Quality of Service (QoS) demands, servers face the challenge of efficiently distributing limited virtual machines across internet-based applications. This can strain performance, particularly for latency-sensitive healthcare applications, resulting in increased delays. Edge computing mitigates this issue by bringing computational, storage, and network resources closer to the data source, working in tandem with cloud computing. Combining edge and cloud computing is essential for improving efficiency, especially for IoT-driven tasks where reliability and low latency are vital concerns. This paper introduces an intelligent task scheduling and allocation model that leverages the Moth Flame Optimization (MFO) algorithm, integrated with deep reinforcement learning (DRL), to optimize edge–cloud computing in sustainable smart cities. The model utilizes a bi-class neural network to classify tasks, ensuring rapid convergence while delivering both local and globally optimal solutions, achieving efficient resource allocation, and enhancing QoS. The model was trained on real-world and synthesized cluster datasets, including the Google cluster dataset, to learn cloud-based job scheduling, which is then applied in real-time. Compared with DRL and non-DRL approaches, the model shows significant performance gains, with a 76.2% reduction in latency, an 81.9% increase in reliability, a 74.4% improvement in resource utilization, and an 83.1% enhancement in QoS. [ABSTRACT FROM AUTHOR]
Copyright of Future Generation Computer Systems is the property of Elsevier B.V. and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Business Source Index
ISSN: 0167-739X
DOI: 10.1016/j.future.2025.107821
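The abstract names Moth Flame Optimization as the core metaheuristic. As background for readers unfamiliar with it, the sketch below implements the standard MFO logarithmic-spiral update (moths fly toward sorted "flame" positions, with the flame count shrinking each iteration) on a toy minimization problem. This is a generic, simplified illustration of the published MFO algorithm, not the paper's scheduler; the function names, parameters, and the sphere objective are all assumptions chosen for the demo.

```python
import numpy as np

def mfo(objective, dim, n_moths=20, iters=200, lb=-5.0, ub=5.0, b=1.0, seed=0):
    """Minimal Moth Flame Optimization sketch (generic algorithm, not the paper's model)."""
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, size=(n_moths, dim))
    best_pos, best_val = None, np.inf
    for it in range(iters):
        fitness = np.apply_along_axis(objective, 1, moths)
        order = np.argsort(fitness)
        flames, flame_fit = moths[order].copy(), fitness[order]  # flames = best positions so far this round
        if flame_fit[0] < best_val:
            best_val, best_pos = float(flame_fit[0]), flames[0].copy()
        # Flame count decreases linearly so the swarm converges on the best solutions
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))
        a = -1.0 + it * (-1.0 / iters)           # convergence constant, moves from -1 toward -2
        for i in range(n_moths):
            j = min(i, n_flames - 1)             # surplus moths all orbit the last remaining flame
            d = np.abs(flames[j] - moths[i])     # distance to the assigned flame
            t = (a - 1.0) * rng.random(dim) + 1.0
            # Logarithmic spiral flight around the flame
            moths[i] = d * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flames[j]
        moths = np.clip(moths, lb, ub)
    return best_pos, best_val

# Toy usage: minimize the sphere function in 2 dimensions
pos, val = mfo(lambda x: float(np.sum(x ** 2)), dim=2)
```

In the paper's setting, the objective would instead score a candidate task-to-VM assignment (latency, reliability, utilization), with DRL guiding the search, but those details are specific to the article itself.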