Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G Network


Bibliographic Details
Published in: IEEE Global Communications Conference (Online), pp. 1-7
Main Authors: Terra, Ahmad; Inam, Rafia; Baskaran, Sandhya; Batista, Pedro; Burdick, Ian; Fersman, Elena
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2020
ISSN:2576-6813
Description
Summary: Artificial Intelligence (AI) is applied across the telecommunication domain, ranging from managing the network, controlling specific hardware functions, preventing failures, and troubleshooting problems to automating network slice management in 5G. Greater levels of autonomy increase the need for explainability of the decisions made by AI, so that humans can understand them (e.g., the underlying data evidence and causal reasoning) and consequently trust them. This paper first presents the application of multiple global and local explainability methods, with the main purpose of analyzing the root cause of Service Level Agreement (SLA) violation prediction in a 5G network-slicing setup by identifying the important features contributing to the decision. Second, it performs a comparative analysis of the applied methods in explaining the predicted violation. Further, the global explainability results are validated using the statistical Causal Dataframe method in order to refine the identified cause of the problem and thus validate the explanations.
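The root-cause analysis described above rests on ranking which features drive the violation predictor's decisions. As a minimal sketch of one family of global explainability techniques (permutation feature importance), assuming purely hypothetical 5G slice features, synthetic data, and a stand-in model that are illustrative only and not taken from the paper:

```python
# Sketch of permutation-based global feature importance for a hypothetical
# SLA-violation predictor. Data, feature names, and the "model" are invented
# for illustration; the paper itself applies several global and local methods.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical slice metrics: cpu_load, packet_loss, latency, throughput
X = rng.random((n, 4))
# Ground truth: violations driven mainly by latency, secondarily packet loss
y = ((2.0 * X[:, 2] + 1.0 * X[:, 1] + 0.1 * rng.standard_normal(n)) > 1.5).astype(int)

def predict(X):
    # Stand-in predictor: thresholded linear score (a trained classifier in practice)
    return ((2.0 * X[:, 2] + 1.0 * X[:, 1]) > 1.5).astype(int)

def permutation_importance(X, y, predict, rng, repeats=10):
    """Mean accuracy drop when each feature column is shuffled in turn."""
    base = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
            drops.append(base - (predict(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return np.array(importances)

imp = permutation_importance(X, y, predict, rng)
names = ["cpu_load", "packet_loss", "latency", "throughput"]
ranked = sorted(zip(names, imp), key=lambda t: -t[1])
```

With this setup, shuffling `latency` hurts accuracy the most, matching the ground truth by construction; features the model ignores show zero importance. In a real slicing deployment the predictor would be a trained model and the ranking would point operators to candidate root causes of predicted violations.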
DOI: 10.1109/GLOBECOM42002.2020.9322496