Explainable Human-Machine Teaming using Model Checking and Interpretable Machine Learning

Detailed bibliography
Published in: FME Workshop on Formal Methods in Software Engineering (Online), pp. 18-28
Main authors: Bersani, Marcello M., Camilli, Matteo, Lestingi, Livia, Mirandola, Raffaela, Rossi, Matteo
Format: Conference paper
Language: English
Published: IEEE, 01.05.2023
ISSN: 2575-5099
Description
Summary: The human-machine teaming paradigm promotes tight teamwork between humans and autonomous machines that collaborate in the same physical space. This paradigm is increasingly widespread in critical domains such as healthcare and domestic assistance. These systems are expected to build a certain level of trust by enforcing dependability and exhibiting interpretable behavior. However, trustworthiness is negatively affected by the black-box nature of these systems, which typically make fully autonomous decisions that may confuse humans or cause hazards in critical domains. We present the EASE approach, whose goal is to build better trust in human-machine teaming by leveraging statistical model checking and model-agnostic interpretable machine learning. We illustrate EASE through an example in healthcare featuring an infinite (dense) space of uncertain human-machine factors, such as the diverse physical and physiological characteristics of the agents involved in the teamwork. Our evaluation demonstrates the suitability and cost-effectiveness of EASE in explaining dependability properties in human-machine teaming.
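
To make the combination of techniques in the abstract concrete, the sketch below (not from the paper; the factor names, ranges, thresholds, and the toy simulator are all illustrative assumptions) shows how a pipeline in the spirit of EASE could work: sample the dense space of uncertain human-machine factors, estimate a dependability property per sample with a black-box stochastic verifier, and fit a shallow decision tree as an interpretable surrogate whose rules describe which factor regions satisfy the property. In the paper itself the estimation step is statistical model checking of a formal teaming model; the Monte Carlo stub here merely stands in for that verifier.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    def sample_factors(n):
        # Hypothetical uncertain factors: human walking speed [m/s], fatigue rate [0..1].
        speed = rng.uniform(0.3, 1.5, n)
        fatigue = rng.uniform(0.0, 1.0, n)
        return np.column_stack([speed, fatigue])

    def estimate_property(factors, runs=200, deadline=60.0):
        # Toy Monte Carlo stand-in for statistical model checking:
        # estimate P(mission time <= deadline) for each factor assignment.
        probs = []
        for speed, fatigue in factors:
            times = 40.0 / speed * (1.0 + fatigue) + rng.normal(0.0, 5.0, runs)
            probs.append(np.mean(times <= deadline))
        return np.array(probs)

    X = sample_factors(500)
    y = (estimate_property(X) >= 0.95).astype(int)  # 1 = dependability property holds

    # Interpretable surrogate: a shallow tree whose splits read as human-checkable rules
    # over the uncertain factors (e.g., "fatigue_rate <= 0.42 and walk_speed > 0.9").
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(surrogate, feature_names=["walk_speed", "fatigue_rate"]))

The printed tree serves as the explanation artifact: each root-to-leaf path is a readable condition on the factor space under which the estimated dependability property holds or fails.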
DOI: 10.1109/FormaliSE58978.2023.00010