LLRS: A Low Latency Resource Scheduling in Cloud Continuum Orchestration

Bibliographic Details
Published in: International Conference on Intelligence in Next Generation Networks, pp. 81-87
Main Authors: Mukuhi, David Kule, Outtagarts, Abdelkader
Format: Conference Proceeding
Language: English
Published: IEEE, 11.03.2024
ISSN: 2472-8144
Description
Summary: The convergence of Network Function Virtualization (NFV) and Cloud Computing marks a transformative milestone in the telecommunications domain, setting the stage for the realization of 5G/6G flagship technologies. Containers, as a rising virtualization solution, have gained significant traction in recent years due to attributes such as a shared host OS, swift launch times, and portability. A central challenge in this landscape is the effective orchestration of resources within the cloud continuum infrastructure, ensuring seamless service operation. The abstraction of network services through NFV and the concept of network slicing amplify the challenges of resource allocation. Kubernetes and Docker Swarm have emerged as powerful orchestration platforms. However, while widely adopted for application deployment, these solutions must meet the distinctive quality-of-service (QoS) demands of the telecommunications industry, such as ultra-reliable low-latency communication (URLLC). This paper proposes a Low Latency Resource Scheduler (LLRS) for the Cloud Continuum. This new scheduling architecture leverages an in-memory data grid and distributed resource predictions to reduce scheduling delay. Our experiments in a heterogeneous environment have shown a reduction of nearly half the time required to schedule a container in Docker Swarm and Kubernetes.
DOI:10.1109/ICIN60470.2024.10494439
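The record does not include the paper's implementation. The Python sketch below only illustrates, under stated assumptions, the mechanism named in the summary: a background task keeps per-node resource predictions in an in-memory data grid (a plain dict standing in for a grid such as Redis or Hazelcast), so that placement becomes a lookup over cached forecasts rather than a live poll of every node. All names (NodeForecast, InMemoryGrid, refresh_forecasts, schedule) and the best-fit policy are hypothetical, not taken from the paper.

# Illustrative sketch only; the LLRS paper does not publish this code.
# A background predictor periodically writes per-node resource forecasts into
# an in-memory store, so the scheduler's hot path is a single lookup instead
# of polling every node at scheduling time.
import time
from dataclasses import dataclass


@dataclass
class NodeForecast:
    node: str
    cpu_free: float   # predicted free CPU cores
    mem_free: float   # predicted free memory (GiB)
    updated_at: float


class InMemoryGrid:
    """Minimal stand-in for a distributed in-memory data grid."""
    def __init__(self):
        self._store: dict[str, NodeForecast] = {}

    def put(self, forecast: NodeForecast) -> None:
        self._store[forecast.node] = forecast

    def snapshot(self) -> list[NodeForecast]:
        return list(self._store.values())


def refresh_forecasts(grid: InMemoryGrid, metrics: dict[str, tuple[float, float]]) -> None:
    """Background task: push fresh (hypothetical) predictions into the grid."""
    now = time.time()
    for node, (cpu, mem) in metrics.items():
        grid.put(NodeForecast(node, cpu, mem, now))


def schedule(grid: InMemoryGrid, cpu_req: float, mem_req: float) -> str | None:
    """Low-latency placement: pick a node using cached forecasts only."""
    candidates = [f for f in grid.snapshot()
                  if f.cpu_free >= cpu_req and f.mem_free >= mem_req]
    if not candidates:
        return None  # no feasible node in the current snapshot
    # Simple best-fit on predicted free CPU; the paper's actual policy may differ.
    return min(candidates, key=lambda f: f.cpu_free - cpu_req).node


if __name__ == "__main__":
    grid = InMemoryGrid()
    # Hypothetical predicted metrics for three cloud-continuum nodes.
    refresh_forecasts(grid, {"edge-1": (1.5, 2.0), "edge-2": (4.0, 8.0), "cloud-1": (16.0, 64.0)})
    print(schedule(grid, cpu_req=2.0, mem_req=4.0))  # expected: edge-2

The point of the sketch is the latency argument: moving prediction and state collection off the scheduling path means the per-request cost is bounded by a cache read and a filter over cached entries, which is consistent with the roughly halved scheduling times reported in the summary, although the paper's actual data structures and prediction method are not shown here.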