Effect of Virtual Work Braking on Distributed Multi-robot Reinforcement Learning


Bibliographic Details
Published in: 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1987 - 1994
Main Author: Kawano, Hiroshi
Format: Conference Proceeding
Language: English, Japanese
Published: IEEE 01.10.2013
ISSN: 1062-922X
Description
Summary: Multi-agent reinforcement learning (MARL) is one of the most promising methods for solving the problem of multi-robot control. One approach to MARL is cooperative Q-learning (CoQ), which uses a learning state space containing the states and actions of all agents. Despite its mathematical foundation for learning convergence, CoQ often suffers from a state space explosion as the number of agents increases. Another approach to MARL is distributed Q-learning (DiQ), in which each agent uses a learning state space that does not contain the states and actions of the other agents. The state space for DiQ can easily be kept compact, so DiQ seems well suited to multi-robot control problems. However, there is no mathematical guarantee of learning convergence in DiQ, and it is difficult to apply DiQ to multi-robot control problems in which definite appointments among the working robots must be considered to accomplish a mission. To resolve these issues in applying DiQ to multi-robot control, we treat the work operated by the robots as a new agent that regulates the robots' motion. We assume that the work is able to brake its own motion: the work stops when a robot attempts to push it in an inappropriate direction. The braking policy for the work is obtained via dynamic programming on a Markov decision process using a map of the environment and the work's geometry. By virtue of this, DiQ converges without a joint state space. Simulation results also show that the proposed method achieves high learning speed.
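The two ingredients of the abstract can be illustrated with a minimal sketch (not taken from the paper): a braking policy computed by value iteration on a toy Markov decision process, and a per-robot Q-learning update that ignores the other agents' states. The 1-D corridor, the names `grid_size`, `goal`, `braked`, and all reward and learning parameters are assumptions made purely for illustration, standing in for the paper's environment map and work geometry.

```python
import random

# --- 1. Work-braking policy via dynamic programming (value iteration) on a toy MDP ---
# Assumption: the "work" occupies a cell in a 1-D corridor and should reach `goal`.
grid_size, goal, gamma = 10, 9, 0.95
actions = [-1, +1]                      # push the work left or right

def step(s, a):
    """Deterministic transition of the work's position, clipped to the corridor."""
    return max(0, min(grid_size - 1, s + a))

V = [0.0] * grid_size
for _ in range(200):                    # value iteration to (near) convergence
    V = [max((-1 + gamma * V[step(s, a)]) for a in actions) if s != goal else 0.0
         for s in range(grid_size)]

def braked(s, a):
    """The work 'brakes' (refuses to move) when a push would lower its value,
    i.e. when the robot pushes it in an inappropriate direction."""
    if s == goal:
        return False
    best = max((-1 + gamma * V[step(s, b)]) for b in actions)
    return (-1 + gamma * V[step(s, a)]) < best

# --- 2. Distributed Q-learning: each robot learns over its own state only ---
alpha, eps, episodes = 0.1, 0.2, 500
Q = {(s, a): 0.0 for s in range(grid_size) for a in actions}   # one robot's table

for _ in range(episodes):
    s = random.randrange(grid_size)
    for _ in range(50):
        if s == goal:
            break
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s2 = s if braked(s, a) else step(s, a)   # braking regulates the motion
        r = 0.0 if s2 == goal else -1.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

print("Greedy push per cell:", [max(actions, key=lambda b: Q[(s, b)]) for s in range(grid_size)])
```

The braking test plays the role the abstract assigns to the work agent: because pushes in value-decreasing directions are blocked, the single-robot Q-table can be learned without any joint state space, which is the effect the paper attributes to virtual work braking.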
DOI: 10.1109/SMC.2013.341