LLRS : A Low Latency Resource Scheduling in Cloud Continuum Orchestration

Saved in:
Bibliographic Details
Title: LLRS : A Low Latency Resource Scheduling in Cloud Continuum Orchestration
Authors: Mukuhi, David Kule, Outtagarts, Abdelkader
Other Authors: Kule Mukuhi, David
Source: 2024 27th Conference on Innovation in Clouds, Internet and Networks (ICIN), pp. 81-87
Publisher Information: IEEE, 2024.
Publication Year: 2024
Subjects: machine learning, low latency scheduling, Distributed Orchestration, In-memory database, [INFO] Computer Science [cs]
Description: The convergence of Network Function Virtualization (NFV) and Cloud Computing marks a transformative milestone in the telecommunications domain, setting the stage for the realization of 5G/6G flagship technologies. Containers, as a rising virtualization solution, have gained significant traction in recent years due to their attributes like shared host OS, swift launch times, and portability. A central challenge in this landscape is the effective orchestration of resources within cloud continuum infrastructure, ensuring seamless service operation. The abstraction of network services through NFV and the concept of network slicing amplify challenges in resource allocation. Kubernetes and Docker Swarm have emerged as powerful orchestration platforms. However, while widely adopted in application deployment, these solutions must still meet the distinctive quality-of-service (QoS) demands of the telecommunications industry, such as ultra-reliable low-latency communication. This paper proposes a Low Latency Resource Scheduler (LLRS) for the Cloud Continuum. This new scheduling architecture leverages an in-memory data grid and distributed resource predictions to reduce scheduling delay. Our experiments in a heterogeneous environment have shown a reduction of nearly half the time required to schedule containers in Docker Swarm and Kubernetes.
Publication Type: Article
Conference object
File Description: application/pdf
DOI: 10.1109/icin60470.2024.10494439
Access URL: https://hal.science/hal-04995146v1
https://hal.science/hal-04995146v1/document
https://doi.org/10.1109/icin60470.2024.10494439
Rights: STM Policy #29
Document Code: edsair.doi.dedup.....3c64d00a1cfd8b9cd421c7d5755cc74f
Database: OpenAIRE