End-to-end action model learning from demonstration in collaborative robotics



Detailed bibliography
Published in: Robotics and autonomous systems, Volume 193; p. 105071
Main author: Zanchettin, Andrea Maria
Medium: Journal Article
Language: English
Publication details: Elsevier B.V., 01.11.2025
ISSN: 0921-8890
Description
Summary: Access to advanced technology is crucial across all engineering disciplines. In the realm of industrial automation, collaborative robotics serves as a key solution, particularly for small or medium-sized enterprises facing frequent shifts in production demands. This paper introduces a Symbolic Programming by Demonstration approach to efficiently configure and operate a collaborative robotics workstation. While motion profiles (i.e., the how) are taught through the commonly used lead-through programming method, the conditions to check before the execution of a motion and its impact on the environment (the when and the what, respectively) are automatically derived using visual feedback. Unlike related works, the present methodology does not require pre-compiled domain knowledge to encode the semantic characterisation of a demonstrated action (i.e., its preconditions and effects). An industrially relevant use case, consisting of a collaborative robotics assembly application, is introduced to validate the approach. Results show high success rates in interpreting and solving user-defined tasks (i.e., goals), as well as the capability of the method to generalise to situations never seen during the acquired demonstrations.

Highlights:
• Method for Action Model Learning in collaborative robotics.
• Predicates are invented by the robotic agent based on visual feedback.
• No need for (application-specific) training datasets.
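The abstract's central idea, deriving an action's preconditions and effects from observations made before and after its execution, can be illustrated with a minimal sketch. The Python code below is a generic set-based action model learner, not the paper's actual algorithm: the function learn_action_model, the demos format, and the predicate strings are all hypothetical, and in the paper the predicates themselves are invented from visual feedback rather than hand-written.

# Hedged sketch of classical action model learning over symbolic states.
# Assumption: each demonstration is a (state_before, state_after) pair of
# predicate sets observed for one action; this is NOT the paper's method.

def learn_action_model(demos):
    """demos: list of (state_before, state_after) sets of predicate strings."""
    # Preconditions: predicates that held before every demonstration.
    preconditions = set.intersection(*(set(b) for b, _ in demos))
    # Add effects: predicates that became true in every demonstration.
    add_effects = set.intersection(*(set(a) - set(b) for b, a in demos))
    # Delete effects: predicates that became false in every demonstration.
    del_effects = set.intersection(*(set(b) - set(a) for b, a in demos))
    return preconditions, add_effects, del_effects

# Example: two demonstrations of a hypothetical "place part" action.
demos = [
    ({"holding(part)", "clear(slot)"},
     {"in(part, slot)", "clear(gripper)"}),
    ({"holding(part)", "clear(slot)", "near(slot)"},
     {"in(part, slot)", "clear(gripper)", "near(slot)"}),
]
pre, add, delete = learn_action_model(demos)
print("pre:", pre)      # {'holding(part)', 'clear(slot)'}
print("add:", add)      # {'in(part, slot)', 'clear(gripper)'}
print("del:", delete)   # {'holding(part)', 'clear(slot)'}

Intersecting across demonstrations is the conservative rule: a predicate is kept as a precondition or effect only if every observed demonstration supports it, which is why more demonstrations yield a less over-constrained action model.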
DOI: 10.1016/j.robot.2025.105071