Coordination in Large Multiagent Reinforcement Learning Problems

Published in: 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, Vol. 2, pp. 236-239
Main authors: Kemmerich, T., Buning, H. K.
Format: Conference paper
Language: English
Publisher: IEEE, 01.08.2011
ISBN: 9781457713736, 145771373X
Description

Summary: Large distributed systems often require intelligent behavior. Although multiagent reinforcement learning can be applied to such systems, several as yet unsolved challenges arise from the large number of simultaneous learners, including the exponential growth of state-action spaces and the need for coordination. In this work, we address these two issues. To this end, we consider a subclass of stochastic games called cooperative sequential stage games. Using a stateless distributed learning algorithm, we overcome the problem of growing state-action spaces. We then present six different techniques for coordinating action selection during the learning process, and we prove a property of the learning algorithm that helps reduce the computational cost of one of these techniques. An experimental analysis of a distributed agent partitioning problem with hundreds of agents shows that the proposed techniques can yield higher-quality solutions and faster convergence than the basic approach; some techniques even outperform a state-of-the-art special-purpose approach.
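The abstract describes a stateless distributed learner with coordinated action selection, but the record itself contains no pseudocode. The following is a minimal illustrative sketch in Python, not taken from the paper, of one plausible reading: each agent keeps a Q-value per action only (no state), selects actions epsilon-greedily, and updates from a shared cooperative reward. All names and parameters here (StatelessQLearner, alpha, epsilon, balance_reward) are assumptions made for illustration.

# Illustrative sketch only; not the algorithm or coordination techniques from the paper.
import random

class StatelessQLearner:
    """One agent keeping a Q-value per action only (no state), which sidesteps
    the exponential joint state-action space mentioned in the abstract."""

    def __init__(self, n_actions, alpha=0.1, epsilon=0.2):
        self.q = [0.0] * n_actions
        self.alpha = alpha      # learning rate (assumed value)
        self.epsilon = epsilon  # exploration rate (assumed value)

    def select_action(self):
        # epsilon-greedy: explore with probability epsilon, otherwise exploit
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # stateless Q-update: move the chosen action's value toward the observed reward
        self.q[action] += self.alpha * (reward - self.q[action])


def play_stage(agents, reward_fn):
    """One stage game: every agent picks an action, a common reward is observed,
    and all agents update their stateless Q-values."""
    joint_action = [agent.select_action() for agent in agents]
    reward = reward_fn(joint_action)  # cooperative setting: shared reward
    for agent, action in zip(agents, joint_action):
        agent.update(action, reward)
    return joint_action, reward


if __name__ == "__main__":
    # Toy partitioning flavour: 100 agents each choose one of 4 partitions;
    # the common reward is higher the more balanced the partition sizes are.
    n_agents, n_partitions = 100, 4

    def balance_reward(joint_action):
        counts = [joint_action.count(p) for p in range(n_partitions)]
        return -float(max(counts) - min(counts))  # 0 means perfectly balanced

    agents = [StatelessQLearner(n_partitions) for _ in range(n_agents)]
    for _ in range(500):
        _, r = play_stage(agents, balance_reward)
    print("final reward:", r)

The toy reward loosely echoes the agent partitioning experiment mentioned in the abstract; the paper's actual coordination techniques, reward model, and convergence results differ and are not reproduced here.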
DOI: 10.1109/WI-IAT.2011.44