Premium control with reinforcement learning

Bibliographic Details
Published in: ASTIN Bulletin: The Journal of the IAA, Vol. 53, No. 2, pp. 233–257
Main Authors: Palmborg, Lina; Lindskog, Filip
Format: Journal Article
Language:English
Published: New York, USA: Cambridge University Press, 01.05.2023
ISSN: 0515-0361, 1783-1350
Description
Summary:We consider a premium control problem in discrete time, formulated in terms of a Markov decision process. In a simplified setting, the optimal premium rule can be derived with dynamic programming methods. However, these classical methods are not feasible in a more realistic setting due to the dimension of the state space and lack of explicit expressions for transition probabilities. We explore reinforcement learning techniques, using function approximation, to solve the premium control problem for realistic stochastic models. We illustrate the appropriateness of the approximate optimal premium rule compared with the true optimal premium rule in a simplified setting and further demonstrate that the approximate optimal premium rule outperforms benchmark rules in more realistic settings where classical approaches fail.
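The abstract describes solving a premium control problem, posed as a Markov decision process, with reinforcement learning and function approximation when dynamic programming is infeasible. As a hedged illustration of that general technique (not the paper's actual models, state space, or algorithm), a minimal semi-gradient Q-learning sketch with a linear approximator on a hypothetical toy premium-setting MDP might look like:

```python
import random

# Hypothetical toy premium-control MDP: discretised surplus levels are the
# states, premium levels are the actions. All dynamics below are illustrative
# assumptions, not the stochastic models studied in the paper.
N_STATES = 5          # surplus levels 0..4
ACTIONS = [0, 1, 2]   # low / medium / high premium
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1

def features(s, a):
    """One-hot state-action features: the simplest linear function approximator."""
    x = [0.0] * (N_STATES * len(ACTIONS))
    x[s * len(ACTIONS) + a] = 1.0
    return x

def q_value(w, s, a):
    return sum(wi * xi for wi, xi in zip(w, features(s, a)))

def step(s, a, rng):
    """Illustrative transition: premiums push surplus up, random claims push it down."""
    claim = rng.random() < 0.5
    s_next = max(0, min(N_STATES - 1, s + a - (2 if claim else 0)))
    reward = -abs(s_next - 2)  # penalise deviation from a target surplus level
    return s_next, reward

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    w = [0.0] * (N_STATES * len(ACTIONS))
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(20):
            # epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < EPS else max(
                ACTIONS, key=lambda b: q_value(w, s, b))
            s_next, r = step(s, a, rng)
            target = r + GAMMA * max(q_value(w, s_next, b) for b in ACTIONS)
            td_error = target - q_value(w, s, a)
            # Semi-gradient Q-learning update of the linear weights.
            w = [wi + ALPHA * td_error * xi
                 for wi, xi in zip(w, features(s, a))]
            s = s_next
    return w

w = train()
# Greedy premium rule implied by the learned weights, one action per surplus level.
policy = [max(ACTIONS, key=lambda a: q_value(w, s, a)) for s in range(N_STATES)]
```

The one-hot features make this equivalent to tabular Q-learning; the point of function approximation in realistic settings is that richer features let the same update rule scale to state spaces too large to tabulate.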
DOI:10.1017/asb.2023.13