LLRS: A Low Latency Resource Scheduling in Cloud Continuum Orchestration



Detailed Bibliography
Published in: International Conference on Intelligence in Next Generation Networks, pp. 81-87
Main Authors: Mukuhi, David Kule; Outtagarts, Abdelkader
Format: Conference Paper
Language: English
Published: IEEE, 11.03.2024
Subjects:
ISSN: 2472-8144
Description
Summary: The convergence of Network Function Virtualization (NFV) and Cloud Computing marks a transformative milestone in the telecommunications domain, setting the stage for the realization of 5G/6G flagship technologies. Containers, as a rising virtualization solution, have gained significant traction in recent years due to attributes such as a shared host OS, swift launch times, and portability. A central challenge in this landscape is the effective orchestration of resources within cloud continuum infrastructure, ensuring seamless service operation. The abstraction of network services through NFV and the concept of network slicing amplify the challenges in resource allocation. Kubernetes and Docker Swarm have emerged as powerful orchestration platforms. However, while widely adopted for application deployment, these solutions must meet the distinctive quality-of-service (QoS) demands of the telecommunications industry, such as ultra-reliable low-latency communication. This paper proposes a Low Latency Resource Scheduler (LLRS) for the Cloud Continuum. This new scheduling architecture leverages an in-memory data grid and distributed resource predictions to reduce scheduling delay. Our experiments in a heterogeneous environment have shown a reduction of nearly half the time required to schedule containers in Docker Swarm and Kubernetes.
DOI: 10.1109/ICIN60470.2024.10494439
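
The abstract describes LLRS only at a high level. As a minimal, hypothetical sketch of the general idea (not the paper's implementation), the snippet below keeps per-node resource predictions in an in-memory map and resolves container placements from that cache rather than querying the cluster state store on the scheduling hot path; all names, fields, and values are assumptions made for illustration.

```python
# Illustrative sketch only: an in-memory cache of predicted free node resources,
# consulted at scheduling time to avoid a synchronous round trip to the cluster
# state store. Node names, fields, and numbers are hypothetical.

import time
from dataclasses import dataclass


@dataclass
class NodePrediction:
    node: str
    predicted_free_cpu: float  # cores expected to be free in the next interval
    predicted_free_mem: float  # MiB expected to be free in the next interval


# Stand-in for the in-memory data grid: a dict kept up to date asynchronously
# by distributed per-node predictors.
prediction_cache: dict[str, NodePrediction] = {
    "edge-1": NodePrediction("edge-1", 1.5, 2048),
    "edge-2": NodePrediction("edge-2", 0.5, 1024),
    "cloud-1": NodePrediction("cloud-1", 8.0, 16384),
}


def schedule(cpu_request: float, mem_request: float) -> str | None:
    """Return the node with the most predicted spare CPU that fits the request."""
    candidates = [
        p
        for p in prediction_cache.values()
        if p.predicted_free_cpu >= cpu_request and p.predicted_free_mem >= mem_request
    ]
    if not candidates:
        return None
    best = max(candidates, key=lambda p: p.predicted_free_cpu)
    # Optimistically reserve the resources in the cache; the orchestrator's
    # actual state is reconciled later, off the scheduling hot path.
    best.predicted_free_cpu -= cpu_request
    best.predicted_free_mem -= mem_request
    return best.node


if __name__ == "__main__":
    start = time.perf_counter()
    placement = schedule(cpu_request=0.5, mem_request=512)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"placed on {placement} in {elapsed_ms:.3f} ms")
```

Because the hot path only reads a local in-memory structure, the per-placement cost is dominated by a filter and a max over the node set, which is consistent with the latency-reduction goal the abstract states, though the paper's actual architecture and experimental setup are not reproduced here.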