On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)

Bibliographic Details
Title: On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
Authors: Gauriat, Charles-Maxime, Pencolé, Yannick, Ribot, Pauline, Brouillet, Gregory
Publisher Information: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024.
Publication Year: 2024
Keywords: [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], XAI, degradation diagnosis, Computing methodologies → Machine learning, multiclass supervised learning, Interpretability, ddc:004
Description: In an industrial maintenance context, degradation diagnosis is the problem of determining the current level of degradation of operating machines from measurements. With the emergence of Machine Learning techniques, such a problem can now be solved by training a degradation model offline and using it online. While these models are increasingly accurate and performant, they are often black boxes whose decisions are not interpretable by human maintenance operators. By contrast, interpretable ML models can provide explanations for their decisions and consequently improve the operator’s confidence in the maintenance decisions based on these models. This paper proposes a new method to quantitatively measure the interpretability of such models that is model-agnostic (it makes no assumption about the class of models) and applies it to degradation models. The method requires the decision maker to set a few high-level parameters in order to measure the interpretability of the models, after which the decision maker can decide whether the obtained models are satisfactory. The method is formally defined and fully illustrated on a decision tree degradation model and on a model trained with a recent neural network architecture called the Multiclass Neural Additive Model.
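The abstract describes, at a high level, a two-step workflow: train a multiclass degradation classifier offline, then score its interpretability with a measure parameterized by the decision maker. This record does not give the paper’s actual measure, so the following Python sketch is only a hypothetical illustration of that workflow; the parameter names, the scoring rule, and the satisfaction threshold are all assumptions, not the authors’ method.

# Hypothetical sketch of the workflow described in the abstract: an
# offline-trained multiclass degradation classifier plus a toy
# interpretability score driven by decision-maker parameters.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic machine "measurements", each labeled with one of 4 degradation levels.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=4, random_state=0)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Hypothetical decision-maker parameters: how many features a maintenance
# operator can realistically track, and the minimum acceptable score.
max_features_tracked = 3
min_acceptable_score = 0.5

# Toy proxy: how well the set of features the model actually relies on
# (here, features the tree splits on) fits the operator's attention budget.
n_used = int(np.count_nonzero(model.feature_importances_))
score = min(1.0, max_features_tracked / max(n_used, 1))
print(f"features used: {n_used}, interpretability score: {score:.2f}")
print("model satisfactory" if score >= min_acceptable_score else "model not satisfactory")

Given some way to identify which inputs a model relies on (e.g., permutation importance instead of tree-specific feature importances), the same budget check applies to any multiclass classifier, which is the kind of model-agnosticism the abstract claims.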
Publication Type: Conference object; Article
File Description: application/pdf
Language: English
DOI: 10.4230/oasics.dx.2024.27
Access URL: https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2024.27
https://laas.hal.science/hal-04862630v1
https://laas.hal.science/hal-04862630v1/document
https://doi.org/10.4230/oasics.dx.2024.27
Rights: CC BY
Document Code: edsair.dedup.wf.002..f0fb575ef1d33930e5b4762d042de93d
Database: OpenAIRE