A stable method for task priority adaptation in quadratic programming via reinforcement learning

Detailed bibliography
Published in: Robotics and Computer-Integrated Manufacturing, Volume 91, Article 102857
Main authors: Testa, Andrea; Laghi, Marco; Bianco, Edoardo Del; Raiola, Gennaro; Hoffman, Enrico Mingo; Ajoudani, Arash
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2025
ISSN: 0736-5845, 1879-2537
Description
Summary: In emerging manufacturing facilities, robots must become more flexible. They are expected to perform complex jobs, exhibiting different behaviors on demand, all within unstructured environments and without requiring reprogramming or setup adjustments. To address this challenge, we introduce A3CQP, a non-strict hierarchical Quadratic Programming (QP) controller. It seamlessly combines motion and interaction functionalities, with priorities dynamically and autonomously adapted through a Reinforcement Learning-based adaptation module. This module uses the Asynchronous Advantage Actor-Critic (A3C) algorithm to ensure rapid convergence and stable training within continuous action and observation spaces. The experimental validation, involving a collaborative peg-in-hole assembly and the polishing of a wooden plate, demonstrates the effectiveness of the proposed solution in terms of its automatic adaptability, responsiveness, flexibility, and safety.
Highlights:
• Attainment of multiple tasks in robot control using Quadratic Programming.
• Usage of a Reinforcement Learning strategy for online adaptation of task priorities.
• Implementation of the Asynchronous Advantage Actor-Critic algorithm.
• Demonstration of the stability of the developed controller.
• Validation on a Franka robot through collaborative peg-in-hole and polishing tasks.
DOI: 10.1016/j.rcim.2024.102857
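
To make the summary's key idea concrete: in a non-strict (weighted) hierarchical QP, task priorities typically enter as weights on a shared least-squares objective rather than as hard lexicographic levels, which is what allows a learning module to adapt them continuously. The Python sketch below illustrates that general pattern only; it is not the paper's A3CQP implementation, and all names in it (weighted_task_qp, the fixed example weights standing in for the A3C policy's output) are hypothetical.

```python
import numpy as np

def weighted_task_qp(jacobians, task_refs, weights, damping=1e-6):
    """One control step of a non-strict (soft-priority) multi-task QP:

        min_dq  sum_i w_i ||J_i dq - v_i||^2  +  damping ||dq||^2

    The problem is unconstrained here, so the optimum solves the
    normal equations H dq = g. The actual A3CQP controller also
    handles constraints and interaction (force) tasks; this sketch
    only shows how weights act as soft task priorities.
    """
    n = jacobians[0].shape[1]            # number of joints
    H = damping * np.eye(n)              # regularized Hessian
    g = np.zeros(n)                      # linear term
    for J, v, w in zip(jacobians, task_refs, weights):
        H += w * (J.T @ J)
        g += w * (J.T @ v)
    return np.linalg.solve(H, g)         # joint-velocity command

# Hypothetical usage: a motion task and an interaction task whose
# weights would, per the paper, come from the A3C policy's action;
# the fixed numbers below merely stand in for that output.
rng = np.random.default_rng(0)
J_motion = rng.standard_normal((6, 7))   # 6-DoF task, 7-DoF arm
J_force = rng.standard_normal((3, 7))
v_motion = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # desired twist
v_force = np.zeros(3)                                # hold contact
dq = weighted_task_qp([J_motion, J_force], [v_motion, v_force],
                      weights=[0.8, 0.2])
```

Raising one weight relative to the other shifts the solution toward that task without ever making the lower-priority task infeasible, which is the practical difference between this soft scheme and a strict task hierarchy.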