Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G Network

Detailed bibliography
Published in: IEEE Global Communications Conference (Online), pp. 1-7
Main authors: Terra, Ahmad; Inam, Rafia; Baskaran, Sandhya; Batista, Pedro; Burdick, Ian; Fersman, Elena
Format: Conference paper
Language: English
Published: IEEE, 01.12.2020
ISSN: 2576-6813
Description
Summary: Artificial Intelligence (AI) is applied in various areas of the telecommunication domain, ranging from managing the network, controlling specific hardware functions, preventing failures, and troubleshooting problems to automating network slice management in 5G. Greater levels of autonomy increase the need for explainability of the decisions made by AI, so that humans can understand them (e.g. the underlying data evidence and causal reasoning) and consequently trust them. This paper first presents the application of multiple global and local explainability methods, with the main purpose of analyzing the root cause of Service Level Agreement (SLA) violation prediction in a 5G network slicing setup by identifying the important features contributing to the decision. Second, it performs a comparative analysis of the applied methods for explaining the predicted violation. Further, the global explainability results are validated using the statistical Causal Dataframe method in order to refine the identified cause of the problem and thus validate the explainability.
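The abstract centers on identifying the features that contribute most to a predicted SLA violation. As a rough illustration only (not the paper's actual method or data), the following minimal sketch shows one common global explainability technique, permutation importance, applied to a toy classifier over assumed 5G KPIs; all feature names and the model rule are hypothetical:

```python
import random

# Toy rule-based "SLA violation" classifier over assumed KPIs
# (latency_ms, throughput_mbps, cpu_load); illustrative only.
def predict(sample):
    latency, throughput, cpu_load = sample
    # Hypothetical rule: violation when latency is high and throughput is low.
    return 1 if (latency > 50 and throughput < 100) else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, seed=0):
    """Global explainability: the drop in accuracy when one feature column
    is shuffled. A larger drop means the model relies more on that feature."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Toy samples: (latency_ms, throughput_mbps, cpu_load)
X = [(60, 80, 0.5), (40, 120, 0.9), (70, 90, 0.2), (30, 150, 0.7)]
y = [predict(x) for x in X]  # labels match the model, so base accuracy is 1.0
imp = permutation_importance(predict, X, y)
# cpu_load is ignored by the toy model, so its importance is exactly 0.
```

The per-feature importance scores play the role of the "important features contributing to the decision" that the paper's global explainability methods identify; the paper itself compares several such methods and validates them with the Causal Dataframe approach.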
DOI: 10.1109/GLOBECOM42002.2020.9322496