Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G Network


Bibliographic Details
Published in: IEEE Global Communications Conference (Online), pp. 1-7
Main Authors: Terra, Ahmad, Inam, Rafia, Baskaran, Sandhya, Batista, Pedro, Burdick, Ian, Fersman, Elena
Format: Conference Proceeding
Language: English
Published: IEEE 01.12.2020
Subjects:
ISSN: 2576-6813
Description
Summary: Artificial Intelligence (AI) is applied across the telecommunications domain, from managing the network, controlling specific hardware functions, preventing failures, and troubleshooting problems, to automating network slice management in 5G. Greater levels of autonomy increase the need for explainability of the decisions made by AI so that humans can understand them (e.g. the underlying data evidence and causal reasoning), consequently enabling trust. This paper first presents the application of multiple global and local explainability methods with the main purpose of analyzing the root cause of Service Level Agreement (SLA) violation prediction in a 5G network-slicing setup, by identifying the features that contribute most to the decision. Second, it performs a comparative analysis of the applied methods for explaining the predicted violation. Finally, the global explainability results are validated using the statistical Causal Dataframe method in order to refine the identified cause of the problem, thus validating the explainability.
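The abstract's core idea, ranking the input features that drive an SLA-violation prediction, can be illustrated with a small sketch. This is not the paper's method or data: the feature names, the synthetic dataset, and the choice of permutation feature importance (one common global explainability technique) are all assumptions made here for illustration.

```python
# Hypothetical sketch of global explainability for an SLA-violation classifier.
# Feature names and data are synthetic, NOT from the paper; permutation feature
# importance stands in for the global explainability methods the paper compares.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["latency_ms", "throughput_mbps", "cpu_load", "packet_loss"]
X = rng.normal(size=(500, 4))
# Synthetic ground truth: violations are driven by latency and packet loss.
y = ((X[:, 0] + X[:, 3]) > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

# Rank features by mean drop in accuracy when each is shuffled.
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

In this synthetic setup the two features that actually generate the violations surface at the top of the ranking, which mirrors how the paper uses feature importance to point at a root cause before validating it causally.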
DOI:10.1109/GLOBECOM42002.2020.9322496