Reinforcement learning method for target hunting control of multi‐robot systems with obstacles

Published in: International Journal of Intelligent Systems, Vol. 37, No. 12, pp. 11275-11298
Main authors: Fan, Zhilin; Yang, Hongyong; Liu, Fei; Liu, Li; Han, Yilin
Format: Journal Article
Language: English
Publication details: New York: John Wiley & Sons, Inc., 1 December 2022
ISSN: 0884-8173, 1098-111X
Description
Summary: Aiming at the target encirclement problem of multi-robot systems, a target hunting control method based on reinforcement learning is proposed. First, the multi-robot system is modeled as a Markov game. According to the hunting task, potential energy models are designed to meet the requirements of reaching the desired state and avoiding obstacles. A multi-robot reinforcement learning algorithm guided by the potential energy models is presented to perform the hunting, combining reinforcement learning principles with model-based control. Second, based on the potential energy models, a target-tracking hunting strategy and a target-circumnavigation hunting strategy are established. In the former, consensus tracking of the multi-robot system is achieved by designing a velocity potential energy function; in the latter, virtual circumnavigation points are added to construct the potential energy function, which realizes the desired circumnavigation. Finally, the effectiveness of target hunting control based on the multi-robot reinforcement learning method is verified by simulation.
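
The abstract does not give the paper's concrete formulations, but the general idea of guiding multi-robot reinforcement learning with potential energy models can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the quadratic attractive and inverse-distance repulsive potentials, the virtual circumnavigation points spread evenly on a circle around the target, and all function names, gains, and thresholds are hypothetical and not taken from the paper.

```python
import numpy as np

# Minimal sketch: potential-energy terms used to shape a per-robot reward.
# Potential forms, gains, and radii are illustrative assumptions, not the
# authors' models.

def attractive_potential(pos, goal, k_att=1.0):
    """Quadratic potential pulling a robot toward its goal point
    (the target, or a virtual circumnavigation point on the hunting circle)."""
    return 0.5 * k_att * np.sum((pos - goal) ** 2)

def repulsive_potential(pos, obstacles, k_rep=1.0, d0=1.5):
    """Inverse-distance potential pushing a robot away from nearby obstacles;
    zero outside the influence radius d0."""
    u = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:
            u += 0.5 * k_rep * (1.0 / max(d, 1e-6) - 1.0 / d0) ** 2
    return u

def circumnavigation_point(target, index, n_robots, radius):
    """Virtual point for robot `index` on a circle of given radius around the
    target, so the n robots are spread evenly for the encirclement."""
    angle = 2.0 * np.pi * index / n_robots
    return target + radius * np.array([np.cos(angle), np.sin(angle)])

def shaped_reward(pos, target, obstacles, index, n_robots, radius=2.0):
    """Reward = negative total potential energy: low energy means the robot is
    near its virtual circumnavigation point and far from obstacles."""
    goal = circumnavigation_point(target, index, n_robots, radius)
    return -(attractive_potential(pos, goal) + repulsive_potential(pos, obstacles))

if __name__ == "__main__":
    target = np.array([0.0, 0.0])
    obstacles = [np.array([1.0, 1.0])]
    for i in range(3):  # three hunting robots at different start positions
        pos = np.array([2.0, 2.0]) + 0.5 * i
        print(f"robot {i}: shaped reward = {shaped_reward(pos, target, obstacles, i, 3):.3f}")
```

In a learning loop, such a shaped reward would typically augment a sparse capture reward, which is one plausible reading of how the potential energy models "guide" the multi-robot reinforcement learning described in the abstract.
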
DOI: 10.1002/int.23042