MRCDRL: Multi-robot coordination with deep reinforcement learning

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 406, pp. 68–76
Main Authors: Wang, Di; Deng, Hongbin; Pan, Zhenhua
Format: Journal Article
Language: English
Published: Elsevier B.V., 17.09.2020
ISSN: 0925-2312, 1872-8286
Description
Summary:
•A novel approach for multi-robot coordination is proposed.
•The approach solves the resource competition problem.
•The approach solves obstacle avoidance problems in real time.
•Results indicate that the method can be effectively applied to multi-robot coordination.

This paper proposes a multi-robot cooperative algorithm based on deep reinforcement learning (MRCDRL). It uses an end-to-end method that trains directly on images generated from each robot's own relative, robot-centered perspective, together with each robot's reward, as input. During training, it is not necessary to specify the target position or movement path of each robot; MRCDRL learns each robot's actions by training the neural network. MRCDRL uses a network structure modified from the dueling network architecture, in which two streams separately represent the state value function and the state-dependent action advantage function, and the outputs of the two streams are merged. The proposed method solves the resource competition problem and, at the same time, handles static and dynamic obstacle avoidance among multiple robots in real time. The MRCDRL algorithm achieves higher accuracy and robustness than DQN and DDQN and can be effectively applied to multi-robot collaboration.
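The two-stream structure described in the summary follows the general dueling Q-network idea. Below is a minimal, hypothetical PyTorch sketch of a standard dueling head, where one stream estimates the state value V(s), the other the state-dependent action advantage A(s, a), and the two are merged into Q(s, a). The abstract does not specify the paper's modified structure, layer sizes, input image resolution, or action set, so the backbone, the 84x84 input, and the default of 5 actions below are illustrative assumptions only, not the authors' implementation.

import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, in_channels: int = 3, num_actions: int = 5):
        super().__init__()
        # Convolutional backbone over the robot-centered input image
        # (84x84 input assumed; the paper's actual image size is not given).
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.features(torch.zeros(1, in_channels, 84, 84)).shape[1]
        # Stream 1: scalar state value V(s).
        self.value = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, 1))
        # Stream 2: state-dependent action advantage A(s, a).
        self.advantage = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        v = self.value(h)          # shape (batch, 1)
        a = self.advantage(h)      # shape (batch, num_actions)
        # Merge the two streams: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        return v + a - a.mean(dim=1, keepdim=True)

# Example: Q-values for a batch of two robot-centered observation images.
q = DuelingQNetwork()(torch.zeros(2, 3, 84, 84))   # shape (2, 5)

Subtracting the mean advantage in the merge is the standard way to keep V and A identifiable; whether the paper's modified structure uses this exact combination is not stated in the abstract.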
DOI: 10.1016/j.neucom.2020.04.028