End-to-end action model learning from demonstration in collaborative robotics

Bibliographic Details
Published in: Robotics and Autonomous Systems, Vol. 193, p. 105071
Main Author: Zanchettin, Andrea Maria
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.11.2025
ISSN: 0921-8890
Description
Summary: Access to advanced technology is crucial across all engineering disciplines. In the realm of industrial automation, collaborative robotics serves as a key solution, particularly for small and medium-sized enterprises facing frequent shifts in production demands. This paper introduces a Symbolic Programming by Demonstration approach to efficiently configure and operate a collaborative robotics workstation. While motion profiles (i.e., the how) are taught through the commonly used lead-through programming method, the conditions to check before executing a motion and its impact on the environment (the when and the what, respectively) are automatically derived from visual feedback. Unlike related works, the present methodology does not require pre-compiled domain knowledge to encode the semantic characterisation of a demonstrated action (i.e., its preconditions and effects). An industrially relevant use case, a collaborative robotic assembly application, is introduced to validate the approach. Results show high success rates in interpreting and solving user-defined tasks (i.e., goals), as well as the method's capability to generalise to situations never seen in the acquired demonstrations.

Highlights:
• Method for Action Model Learning in collaborative robotics.
• Predicates are invented by the robotic agent based on visual feedback.
• No need for (application-specific) training datasets.
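To make the action-model idea concrete, the Python sketch below shows one common way precondition/effect models can be extracted from before/after observations: preconditions as the predicates holding in every demonstrated pre-state, and add/delete effects as the predicates consistently gained or lost. This is a minimal illustrative sketch, not the paper's algorithm (which additionally invents its predicates from visual feedback); all predicate and function names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ActionModel:
        name: str
        preconditions: frozenset  # predicates that must hold beforehand (the "when")
        add_effects: frozenset    # predicates the action makes true (the "what")
        del_effects: frozenset    # predicates the action makes false

    def learn_action_model(name, demos):
        # demos: list of (state_before, state_after) pairs, each a set of
        # ground predicates, e.g. produced by a vision pipeline.
        pre = set.intersection(*(before for before, _ in demos))
        add = set.intersection(*(after - before for before, after in demos))
        delete = set.intersection(*(before - after for before, after in demos))
        return ActionModel(name, frozenset(pre), frozenset(add), frozenset(delete))

    # Two demonstrations of a pick action (all predicate names hypothetical).
    demos = [
        ({"on_table(part)", "gripper_empty"}, {"holding(part)"}),
        ({"on_table(part)", "gripper_empty", "human_near"},
         {"holding(part)", "human_near"}),
    ]
    model = learn_action_model("pick(part)", demos)
    print(model.preconditions)  # frozenset({'on_table(part)', 'gripper_empty'})
    print(model.add_effects)    # frozenset({'holding(part)'})
    print(model.del_effects)    # frozenset({'on_table(part)', 'gripper_empty'})

Once models of this form are available, solving a user-defined goal reduces to standard symbolic planning, e.g. forward search over predicate states.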
DOI: 10.1016/j.robot.2025.105071