Premium control with reinforcement learning

Bibliographic details
Published in: ASTIN Bulletin: The Journal of the IAA, Volume 53, Issue 2, pp. 233-257
Main authors: Palmborg, Lina; Lindskog, Filip
Format: Journal Article
Language: English
Published: New York, USA: Cambridge University Press, 01.05.2023
ISSN: 0515-0361, 1783-1350
Description
Abstract: We consider a premium control problem in discrete time, formulated in terms of a Markov decision process. In a simplified setting, the optimal premium rule can be derived with dynamic programming methods. However, these classical methods are not feasible in a more realistic setting due to the dimension of the state space and lack of explicit expressions for transition probabilities. We explore reinforcement learning techniques, using function approximation, to solve the premium control problem for realistic stochastic models. We illustrate the appropriateness of the approximate optimal premium rule compared with the true optimal premium rule in a simplified setting and further demonstrate that the approximate optimal premium rule outperforms benchmark rules in more realistic settings where classical approaches fail.
DOI: 10.1017/asb.2023.13
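
The abstract above formulates premium control as a Markov decision process and notes that, in a simplified setting, the optimal premium rule can be obtained with dynamic programming. The sketch below is a hypothetical toy illustration of that classical baseline, not the paper's model: the state is a discretized surplus level, the action is a premium picked from a small finite set, claims follow an invented discrete distribution, and the expected discounted cost (premium paid plus a ruin penalty) is minimised by value iteration. All names, parameters, and distributions here are assumptions made for illustration.

```python
import numpy as np

# Toy premium control MDP (illustrative only; not the paper's model).
# State: insurer surplus, discretized to integer levels 0..S_MAX (clamped).
# Action: premium level chosen from a small finite set.
# Dynamics: surplus_{t+1} = clamp(surplus_t + premium - claims), claims random.
# Cost: the premium itself plus a penalty whenever the surplus is exhausted.

S_MAX = 20                               # highest surplus level tracked
PREMIUMS = [0, 1, 2, 3]                  # admissible premium levels (hypothetical)
CLAIM_PROBS = {0: 0.5, 2: 0.3, 4: 0.2}   # discrete claim distribution (hypothetical)
GAMMA = 0.95                             # discount factor
RUIN_PENALTY = 50.0                      # cost charged when surplus hits zero


def clamp(s: int) -> int:
    return max(0, min(S_MAX, s))


def value_iteration(tol: float = 1e-8, max_iter: int = 10_000):
    """Classical dynamic programming solution of the toy MDP."""
    V = np.zeros(S_MAX + 1)
    for _ in range(max_iter):
        V_new = np.empty_like(V)
        for s in range(S_MAX + 1):
            q_values = []
            for p in PREMIUMS:
                q = 0.0
                for claim, prob in CLAIM_PROBS.items():
                    s_next = clamp(s + p - claim)
                    cost = p + (RUIN_PENALTY if s_next == 0 else 0.0)
                    q += prob * (cost + GAMMA * V[s_next])
                q_values.append(q)
            V_new[s] = min(q_values)       # minimise expected discounted cost
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new

    # Greedy premium rule with respect to the converged value function.
    policy = {}
    for s in range(S_MAX + 1):
        best_p, best_q = None, float("inf")
        for p in PREMIUMS:
            q = sum(prob * (p + (RUIN_PENALTY if clamp(s + p - c) == 0 else 0.0)
                            + GAMMA * V[clamp(s + p - c)])
                    for c, prob in CLAIM_PROBS.items())
            if q < best_q:
                best_p, best_q = p, q
        policy[s] = best_p
    return V, policy


if __name__ == "__main__":
    V, policy = value_iteration()
    print("Optimal premium by surplus level:", policy)
```

In the more realistic settings the abstract refers to, the state space is too large to enumerate and the transition probabilities are not available in closed form, so an exact sweep like the one above is infeasible; this is the motivation the authors give for turning to reinforcement learning with function approximation instead.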