Premium control with reinforcement learning

Published in: ASTIN Bulletin: The Journal of the IAA, Volume 53, Issue 2, pp. 233-257
Main authors: Palmborg, Lina; Lindskog, Filip
Format: Journal Article
Language: English
Published: New York, USA: Cambridge University Press, 1 May 2023
ISSN: 0515-0361, 1783-1350
Description
Abstract: We consider a premium control problem in discrete time, formulated in terms of a Markov decision process. In a simplified setting, the optimal premium rule can be derived with dynamic programming methods. However, these classical methods are not feasible in a more realistic setting due to the dimension of the state space and lack of explicit expressions for transition probabilities. We explore reinforcement learning techniques, using function approximation, to solve the premium control problem for realistic stochastic models. We illustrate the appropriateness of the approximate optimal premium rule compared with the true optimal premium rule in a simplified setting and further demonstrate that the approximate optimal premium rule outperforms benchmark rules in more realistic settings where classical approaches fail.
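The abstract describes solving a premium control problem, posed as a Markov decision process, with reinforcement learning and function approximation. As a purely illustrative sketch (not the authors' model), the following applies semi-gradient Q-learning with a linear function approximator to a hypothetical one-dimensional surplus process; the premium grid, claim distribution, reward, and feature map are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# All quantities below are illustrative assumptions, not the paper's model.
TARGET = 10.0                 # desired surplus level
PREMIUMS = [1.0, 2.0, 3.0]    # discrete premium actions
GAMMA, ALPHA, EPS = 0.9, 0.001, 0.1

def features(s):
    """Polynomial features of the (clipped) surplus level."""
    x = s / TARGET
    return np.array([1.0, x, x * x])

w = np.zeros((len(PREMIUMS), 3))  # one linear Q-function per action

def q(s, a):
    return w[a] @ features(s)

def step(s, a):
    """Toy surplus dynamics: premium income minus exponential claims."""
    claim = rng.exponential(2.0)
    s_next = float(np.clip(s + PREMIUMS[a] - claim, 0.0, 2 * TARGET))
    reward = -(s_next - TARGET) ** 2   # penalise deviation from target
    return s_next, reward

s = TARGET
for _ in range(50_000):
    # epsilon-greedy action selection
    if rng.random() < EPS:
        a = int(rng.integers(len(PREMIUMS)))
    else:
        a = int(np.argmax([q(s, b) for b in range(len(PREMIUMS))]))
    s_next, r = step(s, a)
    # semi-gradient Q-learning update on the linear weights
    td_error = r + GAMMA * max(q(s_next, b) for b in range(len(PREMIUMS))) - q(s, a)
    w[a] += ALPHA * td_error * features(s)
    s = s_next

# greedy premium recommended at the target surplus level
greedy_premium = PREMIUMS[int(np.argmax([q(TARGET, b) for b in range(len(PREMIUMS))]))]
```

The sketch shows why function approximation matters: the weight matrix `w` summarises the value of every surplus level with a handful of parameters, whereas tabular dynamic programming would require discretising the state space, which becomes infeasible in the higher-dimensional settings the article targets.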
DOI: 10.1017/asb.2023.13