Bandit algorithms for policy learning: methods, implementation, and welfare-performance

Bibliographic Details
Published in: Japanese Economic Review (Oxford, England), Volume 75, Issue 3, pp. 407–447
Main Authors: Kitagawa, Toru; Rowley, Jeff
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore, 01.07.2024 (Springer Nature B.V.)
ISSN: 1352-4739, 1468-5876
Description
Summary: Static supervised learning—in which experimental data serves as a training sample for the estimation of an optimal treatment assignment policy—is a commonly assumed framework of policy learning. An arguably more realistic but challenging scenario is a dynamic setting in which the planner performs experimentation and exploitation simultaneously with subjects that arrive sequentially. This paper studies bandit algorithms for learning an optimal individualised treatment assignment policy. Specifically, we study the applicability of the EXP4.P (Exponential weighting for Exploration and Exploitation with Experts) algorithm developed by Beygelzimer et al. (Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, pp 19–26, 2011) to policy learning. Assuming that the class of policies has a finite Vapnik–Chervonenkis dimension and that the number of subjects to be allocated is known, we present a high-probability welfare-regret bound for the algorithm. To implement the algorithm, we use an incremental enumeration algorithm for hyperplane arrangements. We perform extensive numerical analysis to assess the algorithm's sensitivity to its tuning parameters and its welfare-regret performance. Further simulation exercises are calibrated to the National Job Training Partnership Act (JTPA) Study sample to determine how the algorithm performs when applied to economic data. Our findings highlight various computational challenges and suggest that the limited welfare gain from the algorithm is due to substantial heterogeneity in causal effects in the JTPA data.
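
As a rough illustration of the kind of procedure the abstract describes, the sketch below implements a minimal EXP4.P-style exponential-weighting update in which a finite set of candidate treatment-assignment policies plays the role of the experts. The covariates, policy class, reward model, and tuning-parameter values are illustrative assumptions, not taken from the paper; in particular, the paper enumerates a VC-class of policies incrementally via hyperplane arrangements, for which the toy linear-threshold policies here merely stand in.

import numpy as np

# Minimal sketch of an EXP4.P-style update (exponential weighting over candidate
# policies treated as experts). All quantities below -- covariates, policy class,
# reward model, and tuning-parameter values -- are illustrative assumptions and
# not the authors' implementation or calibration.

rng = np.random.default_rng(0)

T = 1000   # horizon: number of sequentially arriving subjects, assumed known
K = 2      # number of treatments (actions)
N = 50     # number of candidate policies ("experts")

# Hypothetical covariates and linear-threshold candidate policies.
X = rng.normal(size=(T, 2))
thetas = rng.normal(size=(N, 2))

def advice(t):
    """Each candidate policy deterministically recommends one treatment."""
    rec = (X[t] @ thetas.T > 0).astype(int)   # shape (N,)
    xi = np.zeros((N, K))
    xi[np.arange(N), rec] = 1.0               # one-hot recommendation per policy
    return xi

# Tuning parameters in the spirit of EXP4.P (values are illustrative).
delta = 0.05
p_min = np.sqrt(np.log(N) / (K * T))
gamma = np.sqrt(np.log(N / delta) / (K * T))

w = np.ones(N)                                # weights over candidate policies
for t in range(T):
    xi = advice(t)
    q = w / w.sum()
    p = (1 - K * p_min) * (q @ xi) + p_min    # mixed action probabilities
    p /= p.sum()
    a = rng.choice(K, p=p)

    # Hypothetical binary outcome whose mean depends on covariates and treatment.
    r = float(rng.random() < 0.5 + 0.1 * np.sign(X[t, 0]) * (2 * a - 1))

    # Importance-weighted reward estimate plus a per-policy variance term.
    r_hat = np.zeros(K)
    r_hat[a] = r / p[a]
    y_hat = xi @ r_hat                        # estimated reward of each policy
    v_hat = (xi / p).sum(axis=1)              # variance proxy for each policy
    w *= np.exp(p_min / 2 * (y_hat + gamma * v_hat))
    w /= w.max()                              # rescale for numerical stability

print("highest-weight policy parameters:", thetas[np.argmax(w)])

The weight vector w concentrates on policies whose importance-weighted reward estimates are highest, while the p_min floor and the gamma-scaled variance term control exploration and yield the high-probability flavour of the regret guarantee discussed in the abstract.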
DOI: 10.1007/s42973-024-00165-6