Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G Network


Full Description

Bibliographic Details
Published in: IEEE Global Communications Conference (Online), pp. 1-7
Authors: Terra, Ahmad; Inam, Rafia; Baskaran, Sandhya; Batista, Pedro; Burdick, Ian; Fersman, Elena
Format: Conference paper
Language: English
Published: IEEE, 01.12.2020
ISSN: 2576-6813
Online access: Full text
Description
Summary: Artificial Intelligence (AI) is applied across the telecommunications domain, ranging from network management, control of specific hardware functions, failure prevention, and troubleshooting to automated network slice management in 5G. Greater levels of autonomy increase the need for explainability of the decisions made by AI, so that humans can understand them (e.g. the underlying data evidence and causal reasoning), consequently enabling trust. This paper first presents the application of multiple global and local explainability methods, with the main purpose of analyzing the root cause of Service Level Agreement (SLA) violation prediction in a 5G network-slicing setup by identifying the important features contributing to the decision. Second, it performs a comparative analysis of the applied methods with respect to the explainability of the predicted violation. Further, the global explainability results are validated using the statistical Causal Dataframe method in order to refine the identified cause of the problem, thus validating the explainability.
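The global explainability step described in the abstract — identifying which input features contribute most to the predicted SLA violation — can be illustrated with a minimal permutation-importance sketch. Everything below is an illustrative assumption, not the paper's actual model: the KPI names, the thresholds, and the toy threshold predictor standing in for a trained classifier.

```python
import random

random.seed(0)

# Hypothetical KPI samples: [latency_ms, throughput_mbps, cpu_load].
# cpu_load is deliberately irrelevant to the label, so a good
# explainability method should assign it near-zero importance.
def make_data(n=500):
    X, y = [], []
    for _ in range(n):
        lat = random.uniform(1, 50)
        thr = random.uniform(10, 200)
        cpu = random.uniform(0, 1)
        X.append([lat, thr, cpu])
        y.append(1 if (lat > 30 or thr < 40) else 0)  # 1 = SLA violation
    return X, y

# Stand-in "model": fixed thresholds, assumed learned elsewhere.
def predict(row):
    return 1 if (row[0] > 30 or row[1] < 40) else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y):
    """Global importance = accuracy drop when one feature is shuffled."""
    base = accuracy(X, y)
    scores = []
    for j in range(len(X[0])):
        col = [r[j] for r in X]
        random.shuffle(col)  # break the feature-label relationship
        Xp = [r[:j] + [v] + r[j + 1:] for r, v in zip(X, col)]
        scores.append(base - accuracy(Xp, y))
    return scores

X, y = make_data()
imp = permutation_importance(X, y)
names = ["latency_ms", "throughput_mbps", "cpu_load"]
for name, score in sorted(zip(names, imp), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Here latency and throughput receive positive importance while the irrelevant CPU-load feature scores near zero, which is the kind of feature ranking the global explainability methods in the paper produce for root-cause analysis (the paper itself applies dedicated explainability methods rather than this sketch).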
DOI: 10.1109/GLOBECOM42002.2020.9322496