On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)

Detailed bibliography
Title: On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
Authors: Gauriat, Charles-Maxime; Pencolé, Yannick; Ribot, Pauline; Brouillet, Gregory
Contributors: Charles-Maxime Gauriat, Yannick Pencolé, Pauline Ribot, Gregory Brouillet
Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024.
Publication year: 2024
Subjects: [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], XAI, degradation diagnosis, Computing methodologies → Machine learning, multiclass supervised learning, Interpretability, ddc:004
Description: In an industrial maintenance context, degradation diagnosis is the problem of determining the current level of degradation of operating machines based on measurements. With the emergence of Machine Learning techniques, such a problem can now be solved by training a degradation model offline and using it online. While such models are increasingly accurate and performant, they are often black boxes and their decisions are therefore not interpretable for human maintenance operators. In contrast, interpretable ML models are able to provide explanations for their decisions and consequently improve the confidence of the human operator in the maintenance decisions based on these models. This paper proposes a new method to quantitatively measure the interpretability of such models that is agnostic (it makes no assumption about the class of models) and that is applied to degradation models. The proposed method requires that the decision maker set a few high-level parameters in order to measure the interpretability of the models, and the decision maker can then decide whether the obtained models are satisfactory or not. The method is formally defined and fully illustrated on a decision tree degradation model and on a model trained with a recent neural network architecture called the Multiclass Neural Additive Model.
Document type: Conference object; Article
File description: application/pdf
Language: English
DOI: 10.4230/oasics.dx.2024.27
Access URL: https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2024.27
https://laas.hal.science/hal-04862630v1
https://laas.hal.science/hal-04862630v1/document
https://doi.org/10.4230/oasics.dx.2024.27
Rights: CC BY
Accession number: edsair.dedup.wf.002..f0fb575ef1d33930e5b4762d042de93d
Database: OpenAIRE
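
As a supplementary illustration of the workflow described in the abstract (the decision maker fixes high-level parameters, a model-agnostic interpretability score is computed, and the trained degradation model is accepted or rejected), here is a minimal Python sketch. All names and the scoring rule (DecisionMakerParams, interpretability_score, max_features, min_score) are invented placeholders and do not reproduce the paper's formal measure.

# Hypothetical sketch of the decision workflow from the abstract: a decision
# maker sets high-level parameters, a model-agnostic score is computed over
# per-decision explanations, and the model is accepted or rejected.
# The scoring rule below is invented for illustration only.
from dataclasses import dataclass
from typing import Sequence

@dataclass
class DecisionMakerParams:
    """High-level parameters set by the human decision maker (assumed names)."""
    max_features: int   # how many input features an explanation may use
    min_score: float    # acceptance threshold on the interpretability score

def interpretability_score(explanations: Sequence[Sequence[str]],
                           params: DecisionMakerParams) -> float:
    """Toy model-agnostic score: the fraction of per-decision explanations
    that stay within the feature budget chosen by the decision maker."""
    if not explanations:
        return 0.0
    ok = sum(1 for e in explanations if len(e) <= params.max_features)
    return ok / len(explanations)

def is_satisfactory(explanations: Sequence[Sequence[str]],
                    params: DecisionMakerParams) -> bool:
    """Accept the trained degradation model only if its score clears the bar."""
    return interpretability_score(explanations, params) >= params.min_score

if __name__ == "__main__":
    # Explanations extracted from some trained multiclass degradation model,
    # e.g. root-to-leaf feature paths of a decision tree; purely illustrative.
    explanations = [["vibration", "temperature"],
                    ["vibration"],
                    ["temperature", "pressure", "rpm", "load"]]
    params = DecisionMakerParams(max_features=3, min_score=0.6)
    print(interpretability_score(explanations, params))  # 0.666...
    print(is_satisfactory(explanations, params))         # True

Because the score only consumes explanations, not model internals, the same acceptance test applies unchanged to a decision tree or to a Multiclass Neural Additive Model, which is the agnosticism the abstract claims.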