LLRS : A Low Latency Resource Scheduling in Cloud Continuum Orchestration

Bibliographic Details
Title: LLRS : A Low Latency Resource Scheduling in Cloud Continuum Orchestration
Authors: Mukuhi, David Kule; Outtagarts, Abdelkader
Contributors: Kule Mukuhi, David
Source: 2024 27th Conference on Innovation in Clouds, Internet and Networks (ICIN). :81-87
Publisher Information: IEEE, 2024.
Publication Year: 2024
Subject Terms: machine learning, low latency scheduling, Distributed Orchestration, In-memory database, [INFO] Computer Science [cs]
Description: The convergence of Network Function Virtualization (NFV) and Cloud Computing marks a transformative milestone in the telecommunications domain, setting the stage for the realization of 5G/6G flagship technologies. Containers, as a rising virtualization solution, have gained significant traction in recent years due to attributes such as a shared host OS, swift launch times, and portability. A central challenge in this landscape is the effective orchestration of resources within cloud continuum infrastructure, ensuring seamless service operation. The abstraction of network services through NFV and the concept of network slicing amplify the challenges of resource allocation. Kubernetes and Docker Swarm have emerged as powerful orchestration platforms. However, while widely adopted for application deployment, these solutions must meet the distinctive quality-of-service (QoS) demands of the telecommunications industry, such as ultra-reliable low-latency communication. This paper proposes a Low Latency Resource Scheduler (LLRS) for the Cloud Continuum. This new scheduling architecture leverages an in-memory data grid and distributed resource predictions to reduce scheduling delay. Our experiments in a heterogeneous environment have shown a reduction of nearly half the time required to schedule a container in Docker Swarm and Kubernetes.
Document Type: Article; Conference object
File Description: application/pdf
DOI: 10.1109/icin60470.2024.10494439
Access URL: https://hal.science/hal-04995146v1
https://hal.science/hal-04995146v1/document
https://doi.org/10.1109/icin60470.2024.10494439
Rights: STM Policy #29
Accession Number: edsair.doi.dedup.....3c64d00a1cfd8b9cd421c7d5755cc74f
Database: OpenAIRE