A novel individually rational objective in multi-agent multi-armed bandits: Algorithms and regret bounds

Published in: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 2020-May, p. 1395
Main authors: Tossou, Aristide; Dimitrakakis, Christos; Rzepecki, Jaroslaw; Hofmann, K.
Format: Conference paper
Language: English
Published: 2020
ISSN: 1558-2914, 1548-8403
Description
Summary: We study a two-player stochastic multi-armed bandit (MAB) problem with different expected rewards for each player, a generalisation of two-player general-sum repeated games to stochastic rewards. Our aim is to find the egalitarian bargaining solution (EBS) for the repeated game, which can lead to much higher rewards than the maximin value of both players. Our main contribution is the derivation of an algorithm, UCRG, that achieves, simultaneously for both players, a high-probability regret bound of order Õ(T^{2/3}) after any T rounds of play. We demonstrate that our upper bound is nearly optimal by proving a lower bound of Ω(T^{2/3}) for any algorithm. Experiments confirm our theoretical results and the superiority of UCRG compared to the well-known explore-then-commit heuristic.
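
For context, here is a standard formulation of the egalitarian bargaining solution and the regret it induces (the paper's exact conventions may differ): let d = (d_1, d_2) be the disagreement point, here the maximin values, and let F be the set of expected reward pairs achievable by (possibly mixed) policies. Then

\[
u^{\mathrm{EBS}} = \arg\max_{u \in F} \; \min_{i \in \{1,2\}} \bigl(u_i - d_i\bigr),
\qquad
R_i(T) = T\, u_i^{\mathrm{EBS}} - \sum_{t=1}^{T} r_{i,t},
\]

where r_{i,t} is player i's reward at round t. The abstract's upper bound then states that R_i(T) = Õ(T^{2/3}) holds for both players simultaneously, with high probability.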
DOI: 10.5555/3398761.3398922
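
To make the baseline concrete, below is a hypothetical toy sketch (in Python; the Bernoulli instance, K, T, m, and the commit rule are all illustrative assumptions, not the paper's) of the explore-then-commit heuristic the abstract compares UCRG against, adapted naively to the two-player setting where each joint arm yields a reward pair. The commit rule, maximising the minimum empirical mean, is only an egalitarian-flavoured simplification: the true EBS may require mixing between arms, which this sketch ignores.

import numpy as np

rng = np.random.default_rng(0)

K, T = 5, 10_000                    # number of joint arms and horizon (assumed)
m = max(1, int(T ** (2 / 3)) // K)  # per-arm exploration length; the T^(2/3)
                                    # scaling mirrors the rate in the abstract

# Unknown Bernoulli reward means, one row per player (toy instance).
means = rng.uniform(0.2, 0.9, size=(2, K))

sums = np.zeros((2, K))   # per-player empirical reward sums, one column per arm
total = np.zeros(2)       # cumulative reward of each player

# Exploration phase: pull every joint arm m times, round-robin.
for t in range(m * K):
    k = t % K
    r = (rng.random(2) < means[:, k]).astype(float)  # sampled reward pair
    sums[:, k] += r
    total += r

# Commit phase: play the arm whose worse-off player fares best empirically.
best = int(np.argmax((sums / m).min(axis=0)))
for t in range(m * K, T):
    total += (rng.random(2) < means[:, best]).astype(float)

print("average reward per player:", total / T)

The T^(2/3)-sized exploration budget is the usual gap-independent ETC choice and matches the abstract's rate: with roughly T^{2/3}/K samples per arm, the exploration cost and the commit-phase estimation error both contribute regret of order T^{2/3}.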