Explanation-Driven Self-Adaptation Using Model-Agnostic Interpretable Machine Learning

Detailed bibliography
Published in: ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems (Online), pp. 189-199
Main authors: Negri, Francesco Renato; Nicolosi, Niccolo; Camilli, Matteo; Mirandola, Raffaela
Format: Conference paper
Language: English
Published: ACM, 15.04.2024
ISSN: 2157-2321
Description
Summary: Self-adaptive systems increasingly rely on black-box predictive models (e.g., Neural Networks) to make decisions and steer adaptations. The lack of transparency of these models makes it hard to explain adaptation decisions and their possible effects on the surrounding environment. Furthermore, adaptation decisions in this context are typically the outcome of expensive optimization processes. The complexity arises from the inability to directly observe or comprehend the internal mechanisms of the black-box predictive models, which requires employing iterative methods to explore a possibly large search space and optimize according to many goals. Here, balancing the trade-off between effectiveness and cost becomes a crucial challenge. In this paper, we propose explanation-driven self-adaptation, a novel approach that embeds model-agnostic interpretable machine learning techniques into the feedback loop to enhance the transparency of the predictive models and gain insights that help drive adaptation decisions effectively while significantly reducing the cost of planning them. Our empirical evaluation demonstrates the cost-effectiveness of our approach using two evaluation subjects in the robotics domain.
DOI: 10.1145/3643915.3644085
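
The summary describes embedding model-agnostic interpretable machine learning into the self-adaptation feedback loop so that explanations of the black-box predictive model reduce the cost of planning adaptations. The sketch below is only an illustration of that idea, not the authors' implementation: it applies permutation feature importance (a model-agnostic explanation technique available in scikit-learn) to a black-box regressor and then restricts the planner's search to the adaptation parameters the explanation flags as influential. The synthetic data, the MLPRegressor surrogate, the parameter grid, and the 0.01 importance threshold are all assumptions made for this example.

# Illustrative sketch (assumptions: synthetic data, MLPRegressor as the
# black-box model, 0.01 importance threshold); not the paper's implementation.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic monitoring data: rows = observed configurations, columns = four
# adaptation parameters; target = a predicted quality metric. Only the first
# two parameters actually influence the target here.
X = rng.uniform(size=(200, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.05 * rng.normal(size=200)

# Black-box predictive model consulted by the planner.
black_box_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
black_box_model.fit(X, y)

# Model-agnostic explanation: which adaptation parameters drive the prediction?
result = permutation_importance(black_box_model, X, y, n_repeats=10, random_state=0)
important = [i for i, imp in enumerate(result.importances_mean) if imp > 0.01]
print("parameters worth exploring:", important)

# Plan only over the influential parameters, freezing the rest at a default:
# the candidate grid shrinks exponentially with each pruned dimension.
levels = np.linspace(0.0, 1.0, 5)
default = 0.5
candidates = []
for combo in itertools.product(levels, repeat=len(important)):
    config = np.full(X.shape[1], default)
    config[important] = combo
    candidates.append(config)

best = max(candidates, key=lambda c: black_box_model.predict(c.reshape(1, -1))[0])
print("selected adaptation:", np.round(best, 2))

Pruning the search space this way is the kind of planning-cost reduction the summary refers to: with two of the four parameters pruned in this toy setup, the candidate grid shrinks from 5^4 = 625 configurations to 5^2 = 25.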