Explainable Human-Machine Teaming using Model Checking and Interpretable Machine Learning
| Published in: | FME Workshop on Formal Methods in Software Engineering (Online), pp. 18-28 |
|---|---|
| Main Authors: | |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 01.05.2023 |
| ISSN: | 2575-5099 |
| Summary: | The human-machine teaming paradigm promotes tight teamwork between humans and autonomous machines that collaborate in the same physical space. This paradigm is increasingly widespread in critical domains, such as healthcare and domestic assistance. These systems are expected to build a certain level of trust by enforcing dependability and exhibiting interpretable behavior. However, trustworthiness is negatively affected by the black-box nature of these systems, which typically make fully autonomous decisions that may be confusing for humans or cause hazards in critical domains. We present the EASE approach, whose goal is to build better trust in human-machine teaming by leveraging statistical model checking and model-agnostic interpretable machine learning. We illustrate EASE through an example in healthcare featuring an infinite (dense) space of uncertain human-machine factors, such as the diverse physical and physiological characteristics of the agents involved in the teamwork. Our evaluation demonstrates the suitability and cost-effectiveness of EASE in explaining dependability properties in human-machine teaming. |
|---|---|
| DOI: | 10.1109/FormaliSE58978.2023.00010 |
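To make the summary concrete, the following minimal sketch illustrates the kind of pipeline the abstract describes: a statistical model checking step estimates a dependability property for sampled configurations of uncertain human-machine factors, and a model-agnostic interpretable surrogate (here a shallow decision tree) explains which regions of the factor space satisfy it. This is not the EASE implementation: the factor names, the toy stochastic task model, the deadline, and the 0.9 requirement threshold are all hypothetical stand-ins chosen for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=0)

def estimate_satisfaction(speed, fatigue, runs=200):
    """Statistical model checking step (Monte Carlo simulation): estimate
    the probability that a simulated human-robot task finishes before a
    fixed deadline, for one configuration of the uncertain factors."""
    # Toy stochastic model: completion time grows with fatigue and
    # shrinks with walking speed (hypothetical, for illustration only).
    times = rng.normal(loc=10.0 / speed + 5.0 * fatigue, scale=1.0, size=runs)
    return np.mean(times <= 12.0)  # estimated satisfaction probability

# Sample the dense (infinite) space of uncertain human-machine factors.
n = 500
speeds = rng.uniform(0.5, 2.0, size=n)    # walking speed in m/s (assumed range)
fatigues = rng.uniform(0.0, 1.0, size=n)  # normalized fatigue level (assumed)
X = np.column_stack([speeds, fatigues])

# Label each configuration: does it meet the dependability requirement
# "task completes on time with probability at least 0.9"?
y = np.array([estimate_satisfaction(s, f) >= 0.9 for s, f in X])

# Model-agnostic interpretable surrogate: a shallow decision tree whose
# rules describe which regions of the factor space satisfy the property.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(surrogate, feature_names=["speed", "fatigue"]))
```

The printed rules (threshold tests on speed and fatigue) serve as a human-readable explanation of where the dependability property holds, which is the role interpretable machine learning plays in the approach the abstract outlines.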