Coordination in Large Multiagent Reinforcement Learning Problems

Bibliographic Details
Published in: 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, Vol. 2, pp. 236-239
Authors: Kemmerich, T., Buning, H. K.
Format: Conference paper
Language: English
Published: IEEE, 01.08.2011
ISBN: 9781457713736, 145771373X
Online access: Full text
Description
Summary: Large distributed systems often require intelligent behavior. Although multiagent reinforcement learning can be applied to such systems, several as yet unsolved challenges arise from the large number of simultaneous learners. Among others, these include the exponential growth of state-action spaces and coordination. In this work, we address these two issues. To this end, we consider a subclass of stochastic games called cooperative sequential stage games. Using a stateless distributed learning algorithm, we solve the problem of growing state-action spaces. We then present six different techniques to coordinate action selection during the learning process. We prove a property of the learning algorithm that helps to reduce the computational cost of one technique. An experimental analysis of a distributed agent partitioning problem with hundreds of agents reveals that the proposed techniques can lead to higher-quality solutions and increase convergence speed compared to the basic approach. Some techniques even outperform a state-of-the-art special-purpose approach.
DOI: 10.1109/WI-IAT.2011.44
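
The summary describes a stateless distributed learning approach in which each agent keeps values only for its own actions, so memory does not grow with the joint state-action space. The Python sketch below illustrates that general idea under stated assumptions: it is a generic stateless, epsilon-greedy learner with illustrative class and parameter names, not the authors' algorithm, and the paper's six coordination techniques are not reproduced here.

import random

class StatelessLearner:
    # Generic sketch of a stateless learner: one value per own action,
    # no joint state-action table. Names and defaults are illustrative
    # assumptions, not the algorithm from the paper.

    def __init__(self, actions, alpha=0.1, epsilon=0.1):
        self.actions = list(actions)
        self.alpha = alpha        # learning rate
        self.epsilon = epsilon    # exploration probability
        self.value = {a: 0.0 for a in self.actions}

    def select_action(self):
        # epsilon-greedy over the agent's own action values; a coordination
        # mechanism could bias or override this choice during learning
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def update(self, action, reward):
        # stateless update: move the chosen action's value toward the reward
        self.value[action] += self.alpha * (reward - self.value[action])


# Usage sketch: hundreds of agents, each holding a small per-action table
# instead of one exponentially large joint state-action table. The reward
# computation below is a placeholder for an environment-specific signal.
agents = [StatelessLearner(actions=range(4)) for _ in range(300)]
for step in range(100):
    joint = [agent.select_action() for agent in agents]
    rewards = [1.0 if a == 0 else 0.0 for a in joint]  # placeholder reward
    for agent, a, r in zip(agents, joint, rewards):
        agent.update(a, r)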