Cooperative multi-robot reinforcement learning: A framework in hybrid state space

Detailed Bibliography
Published in: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1190 - 1196
Main Authors: Xueqing Sun, Tao Mao, Kralik, J.D., Ray, L.E.
Format: Conference Paper
Language: English
Published: IEEE, 01.10.2009
ISBN: 9781424438037, 1424438039
ISSN: 2153-0858
Description
Summary: In the area of autonomous multi-robot cooperation, much emphasis has been placed on how to coordinate individual robot behaviors in order to achieve an optimal solution to task completion as a team. This paper presents an approach to cooperative multi-robot reinforcement learning based on a hybrid state space representation of the environment to achieve both task learning and heterogeneous role emergence in a unified framework. The methodology also involves learning space reduction through a neural perception module and a progressive rescheduling algorithm that interleaves online execution and relearning to adapt to environmental uncertainties and enhance performance. The approach aims to reduce the combinatorial complexity inherent in role-task optimization, achieving a satisfactory rather than globally optimal solution to complex team-based tasks. Empirical evaluation of the proposed framework is conducted through simulation of a foraging task.
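
For orientation only, the sketch below illustrates the general idea of reinforcement learning over a hybrid state space in a foraging setting: a discrete task phase is combined with a continuous sensor reading that is abstracted into a coarse feature before a standard tabular Q-learning update. This is a minimal illustration under assumptions of this record's editor, not the authors' implementation: the action set, task phases, reward signal, and toy dynamics are all hypothetical, and the hand-coded perceive() discretizer merely stands in for the paper's neural perception module.

import random
from collections import defaultdict

# Hypothetical action set and task phases, chosen only for illustration.
ACTIONS = ["search", "grasp", "return_home"]
PHASES = ["seeking", "carrying"]

def perceive(distance_to_item: float) -> str:
    """Abstract a continuous range reading into a small discrete feature,
    standing in for a learned perception module that shrinks the state space."""
    if distance_to_item < 0.5:
        return "near"
    if distance_to_item < 2.0:
        return "mid"
    return "far"

def hybrid_state(phase: str, distance_to_item: float) -> tuple:
    """Hybrid state = (discrete task phase, discretized continuous feature)."""
    return (phase, perceive(distance_to_item))

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One-step Q-learning backup over the hybrid state."""
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(Q, s, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

if __name__ == "__main__":
    Q = defaultdict(float)
    # Toy interaction loop with made-up dynamics and rewards, purely to show
    # how the hybrid state feeds the tabular update.
    phase, dist = "seeking", 3.0
    for step in range(100):
        s = hybrid_state(phase, dist)
        a = epsilon_greedy(Q, s)
        if a == "search" and phase == "seeking":
            dist, r = max(0.0, dist - 0.4), 0.0      # searching closes the distance
        elif a == "grasp" and phase == "seeking" and dist < 0.5:
            phase, r = "carrying", 1.0               # successful pickup
        elif a == "return_home" and phase == "carrying":
            phase, dist, r = "seeking", 3.0, 1.0     # item delivered, start over
        else:
            r = -0.1                                 # small penalty for wasted steps
        q_learning_update(Q, s, a, r, hybrid_state(phase, dist))

In a multi-robot version of such a sketch, each robot would hold its own table (or share one), and role differentiation would have to emerge from the learning signal rather than from the hard-coded phases used here.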
DOI: 10.1109/IROS.2009.5354406