Design and Evaluation of Explainable BDI Agents

Bibliographic details
Published in: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Vol. 2, pp. 125-132
Main authors: Harbers, M., van den Bosch, K., Meyer, J.
Format: Conference paper
Language: English
Published: IEEE, 01.08.2010
ISBN: 9781424484829, 1424484820
Description
Abstract: It is widely acknowledged that providing explanations is an important capability of intelligent systems. Explanation capabilities are useful, for example, in scenario-based training systems with intelligent virtual agents. Trainees learn more from scenario-based training when they understand why the virtual agents act the way they do. In this paper, we present a model for explainable BDI agents which enables the explanation of BDI agent behavior in terms of underlying beliefs and goals. Different explanation algorithms can be specified in the model, generating different types of explanations. In a user study (n=20), we compare four explanation algorithms by asking trainees which explanations they consider most useful. Based on the results, we discuss which explanation types should be given under what conditions.
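To make the core idea concrete, the sketch below illustrates explaining an agent's action in terms of the goal it serves and the belief that triggered it, in the spirit of the abstract. It is a minimal illustration only: the class and function names (GoalNode, explain_by_goal, explain_by_belief) and the toy scenario are assumptions, not the paper's actual model or its four explanation algorithms.

```python
# Minimal sketch: explain an action by its parent goal or its enabling
# belief. All names and the scenario are illustrative assumptions; the
# paper's concrete model and algorithms are not reproduced here.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GoalNode:
    """A node in a goal hierarchy; leaf nodes correspond to actions."""
    name: str
    enabling_belief: Optional[str] = None  # belief that made this node applicable
    subgoals: List["GoalNode"] = field(default_factory=list)
    parent: Optional["GoalNode"] = None

    def add(self, child: "GoalNode") -> "GoalNode":
        """Attach a subgoal and record its parent for explanation lookup."""
        child.parent = self
        self.subgoals.append(child)
        return child


def explain_by_goal(action: GoalNode) -> str:
    """Goal-based explanation type: cite the parent goal the action serves."""
    if action.parent is None:
        return f"I did '{action.name}'."
    return f"I did '{action.name}' because I wanted to {action.parent.name}."


def explain_by_belief(action: GoalNode) -> str:
    """Belief-based explanation type: cite the belief that triggered the action."""
    if action.enabling_belief is None:
        return f"I did '{action.name}'."
    return f"I did '{action.name}' because I believed {action.enabling_belief}."


if __name__ == "__main__":
    # Toy scenario, loosely in the style of scenario-based training.
    root = GoalNode("extinguish the fire")
    search = root.add(GoalNode("locate victims"))
    action = search.add(GoalNode("open the door",
                                 enabling_belief="a victim was behind the door"))

    print(explain_by_goal(action))    # goal-based explanation
    print(explain_by_belief(action))  # belief-based explanation
```

Different explanation algorithms then amount to different traversals of the same structure, e.g. citing the immediate parent goal versus a higher-level goal, or a belief versus a goal, which is the kind of variation the user study compares.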
DOI: 10.1109/WI-IAT.2010.115