Model tree methods for explaining deep reinforcement learning agents in real-time robotic applications

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Volume 515, pp. 133–144
Main Authors: Gjærum, Vilde B.; Strümke, Inga; Løver, Jakob; Miller, Timothy; Lekkas, Anastasios M.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2023
ISSN: 0925-2312, 1872-8286
Description
Summary: Deep reinforcement learning has proven useful in the field of robotics, but the black-box nature of deep neural networks impedes the applicability of deep reinforcement learning agents to real-world tasks. The field of explainable artificial intelligence addresses this by developing explanation methods that aim to explain such agents to humans. Model trees used as surrogate models have proven useful for explaining black-box models in real-world robotic applications, in particular because they can provide explanations in real time. In this paper, we provide an overview and analysis of available methods for building model trees that explain deep reinforcement learning agents solving robotics tasks. We find that supporting multiple outputs is important for the model to capture the dependencies between coupled output features, i.e., actions. Additionally, our results indicate that introducing domain knowledge via a hierarchy among the input features during the building process yields higher accuracy and a faster building process.
DOI: 10.1016/j.neucom.2022.10.014
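
The abstract above describes approximating a black-box deep reinforcement learning policy with a multi-output model tree, i.e., a decision tree whose leaves hold linear models fitted jointly over all actions. The following is a minimal sketch of that general idea using scikit-learn; the `policy` function, the state/action dimensions, and the depth and sampling choices are hypothetical placeholders, not the paper's actual agent or method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def policy(states):
    """Hypothetical stand-in for a trained black-box DRL agent.

    Maps 4-D states to two coupled continuous actions.
    """
    base = np.tanh(states[:, 0] - states[:, 1])
    return np.column_stack([base, np.sin(states[:, 2]) * base])

# 1. Sample states and query the black-box agent for its actions.
X = rng.uniform(-1.0, 1.0, size=(5000, 4))
Y = policy(X)

# 2. Partition the state space with a shallow multi-output regression tree.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50).fit(X, Y)

# 3. Turn it into a model tree: fit one linear model per leaf, jointly
#    over all action outputs, so couplings between actions are preserved.
train_leaves = tree.apply(X)
leaf_models = {
    leaf: LinearRegression().fit(X[train_leaves == leaf],
                                 Y[train_leaves == leaf])
    for leaf in np.unique(train_leaves)
}

def surrogate(states):
    """Route each state to its leaf and apply that leaf's linear model."""
    out = np.empty((len(states), Y.shape[1]))
    leaves = tree.apply(states)
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        out[mask] = leaf_models[leaf].predict(states[mask])
    return out

print("mean absolute surrogate error:", np.abs(surrogate(X) - Y).mean())
```

Fitting each leaf's linear model over all action dimensions at once is what lets the surrogate reflect dependencies between coupled actions, one of the abstract's findings. The domain-knowledge hierarchy among input features that the abstract mentions could, for instance, be imposed by restricting which features are eligible for splits at shallow depths; that refinement is not shown in this sketch.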