Online task segmentation by merging symbolic and data-driven skill recognition during kinesthetic teaching


Detailed description

Bibliographic details
Published in: Robotics and Autonomous Systems, Vol. 162, p. 104367
Main authors: Eiband, Thomas; Liebl, Johanna; Willibald, Christoph; Lee, Dongheui
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2023
Keywords:
ISSN:0921-8890
Online access: Full text
Description
Abstract: Programming by Demonstration (PbD) is used to transfer a task from a human teacher to a robot, where it is of high interest to understand the underlying structure of what has been demonstrated. Such a demonstrated task can be represented as a sequence of so-called actions or skills. This work focuses on the recognition part of the task transfer. We propose a framework that recognizes skills online during a kinesthetic demonstration by means of position and force–torque (wrench) sensing; it therefore works independently of visual perception. The recognized skill sequence constitutes a task representation that lets the user intuitively understand what the robot has learned. The skill recognition algorithm combines symbolic skill segmentation, which makes use of pre- and post-conditions, with data-driven prediction, which uses support vector machines for skill classification. This combines the advantages of both techniques: the inexpensive evaluation of symbols and the data-driven classification of complex observations. The framework is thus able to detect a larger variety of skills, such as manipulation and force-based skills used in assembly tasks. The applicability of our framework is demonstrated in a user study that achieves 96% accuracy in online skill recognition and highlights the benefits of the generated task representation over a baseline representation. The results show that the task load could be reduced, that trust and explainability could be increased, and that the users were able to debug the robot program using the generated task representation.
• Task segmentation divides a demonstrated task into a sequence of skills
• Symbolic skill recognition evaluates predefined pre- and post-conditions
• Data-driven (sub-symbolic) skill recognition uses a trained classifier
• Recognition pipelines run concurrently to improve segmentation accuracy
• Online segmentation immediately constructs a visual task representation
• A user study evaluates the approach and its online task representation
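The fusion of the two recognition pipelines described above can be sketched in a few lines. The sketch below is illustrative only: the skill names, condition thresholds, and features are hypothetical, and the trained SVM of the paper's framework is replaced by a trivial stand-in rule, since the actual conditions and classifier are not given in this record.

```python
# Hypothetical sketch: combine symbolic pre-/post-condition checks with a
# data-driven fallback classifier. All names and thresholds are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    gripper_closed: bool      # binary gripper state
    contact_force: float      # magnitude of the sensed wrench [N]
    position_delta: float     # end-effector displacement in this segment [m]

def symbolic_recognize(obs: Observation) -> Optional[str]:
    """Evaluate hand-written symbolic conditions; None if no symbol fires."""
    if obs.gripper_closed and obs.position_delta < 0.01 and obs.contact_force > 5.0:
        return "insert"       # in contact while barely moving -> insertion
    if obs.gripper_closed and obs.position_delta >= 0.01:
        return "transport"    # carrying a grasped object between poses
    return None               # ambiguous -> defer to the data-driven pipeline

def data_driven_recognize(obs: Observation) -> str:
    """Stand-in for a trained classifier (an SVM in the paper's framework).
    Here: a trivial rule on the wrench signal, purely for illustration."""
    return "press" if obs.contact_force > 10.0 else "move_free"

def recognize(obs: Observation) -> str:
    """Run both pipelines; prefer the cheap symbolic result when available."""
    label = symbolic_recognize(obs)
    return label if label is not None else data_driven_recognize(obs)

print(recognize(Observation(True, 8.0, 0.005)))   # symbolic branch fires
print(recognize(Observation(False, 12.0, 0.0)))   # falls back to classifier
```

In a full implementation, the data-driven branch would be a classifier trained on wrench and position features (e.g., scikit-learn's `SVC`), and the symbolic conditions would be evaluated continuously during the kinesthetic demonstration to segment the stream online.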
DOI:10.1016/j.robot.2023.104367