Safe multi-agent reinforcement learning for multi-robot control

Bibliographic details
Published in: Artificial Intelligence, Volume 319, Article 103905
Main authors: Gu, Shangding; Grudzien Kuba, Jakub; Chen, Yuanpei; Du, Yali; Yang, Long; Knoll, Alois; Yang, Yaodong
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.06.2023
ISSN: 0004-3702, 1872-7921
Description
Summary: A challenging problem in robotics is how to control multiple robots cooperatively and safely in real-world applications. Yet, multi-robot control has barely been studied from the perspective of safe multi-agent reinforcement learning (MARL). To fill this gap, we investigate safe MARL for multi-robot control on cooperative tasks, in which each individual robot has to not only meet its own safety constraints while maximising its reward, but also consider those of others to guarantee safe team behaviours. Firstly, we formulate the safe MARL problem as a constrained Markov game and employ policy optimisation to solve it theoretically. The proposed algorithm guarantees monotonic improvement in reward and satisfaction of safety constraints at every iteration. Secondly, as approximations to the theoretical solution, we propose two safe multi-agent policy gradient methods: Multi-Agent Constrained Policy Optimisation (MACPO) and MAPPO-Lagrangian. Thirdly, we develop the first three safe MARL benchmarks, Safe Multi-Agent MuJoCo (Safe MAMuJoCo), Safe Multi-Agent Robosuite (Safe MARobosuite) and Safe Multi-Agent Isaac Gym (Safe MAIG), to expand the toolkit of the MARL and robot control research communities. Finally, experimental results on the three safe MARL benchmarks indicate that our methods achieve state-of-the-art performance in balancing reward improvement against satisfaction of safety constraints, compared with strong baselines. Demos and code are available at https://sites.google.com/view/aij-safe-marl/.

Highlights:
• The problem of safe multi-agent reinforcement learning is formulated.
• The Multi-Agent Constrained Policy Optimisation (MACPO) method is proposed.
• MACPO guarantees both satisfaction of safety constraints and monotonic performance improvement.
• Three safe MARL benchmarks are developed: Safe Multi-Agent MuJoCo (Safe MAMuJoCo), Safe Multi-Agent Robosuite (Safe MARobosuite) and Safe Multi-Agent Isaac Gym (Safe MAIG).
• Experiments on multiple benchmark environments confirm the effectiveness of MACPO and MAPPO-Lagrangian.
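For orientation, the constrained Markov game mentioned in the abstract and the Lagrangian relaxation behind methods such as MAPPO-Lagrangian can be sketched in standard constrained-RL notation. The following is a minimal, illustrative sketch only: the symbols (joint policy \pi, joint action \mathbf{a}_t, reward R, per-agent cost functions C^i_j with budgets c^i_j, discount \gamma) follow the generic constrained-MDP convention and are not taken from the paper itself.

% Generic constrained Markov game objective: maximise expected discounted
% reward subject to per-agent expected discounted cost budgets (amsmath).
\begin{align*}
  \max_{\pi}\quad & J(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, \mathbf{a}_t)\right] \\
  \text{s.t.}\quad & J^{i}_{j}(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, C^{i}_{j}(s_t, \mathbf{a}_t)\right] \le c^{i}_{j},
  \qquad i = 1,\dots,n,\ \ j = 1,\dots,m^{i}.
\end{align*}

% Generic Lagrangian relaxation: non-negative multipliers penalise constraint
% violations; the policy is updated by ascent on L, the multipliers by descent.
\[
  \mathcal{L}(\pi, \lambda) = J(\pi) - \sum_{i=1}^{n}\sum_{j=1}^{m^{i}} \lambda^{i}_{j}\,\bigl(J^{i}_{j}(\pi) - c^{i}_{j}\bigr),
  \qquad \max_{\pi}\,\min_{\lambda \ge 0}\, \mathcal{L}(\pi, \lambda).
\]

In this generic scheme a multiplier grows while its constraint is violated and decays towards zero once the constraint is satisfied, so the policy update trades reward against constraint satisfaction automatically; constrained-policy-optimisation-style methods instead enforce the cost bound directly within each trust-region update step.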
DOI: 10.1016/j.artint.2023.103905