Reinforcement learning with dynamic convex risk measures



Published in: Mathematical Finance, Vol. 34, No. 2, pp. 557–587
Main authors: Coache, Anthony; Jaimungal, Sebastian
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.04.2024
ISSN: 0960-1627, 1467-9965
Description
Summary: We develop an approach for solving time‐consistent risk‐sensitive stochastic optimization problems using model‐free reinforcement learning (RL). Specifically, we assume agents assess the risk of a sequence of random variables using dynamic convex risk measures. We employ a time‐consistent dynamic programming principle to determine the value of a particular policy, and develop policy gradient update rules that aid in obtaining optimal policies. We further develop an actor–critic style algorithm using neural networks to optimize over policies. Finally, we demonstrate the performance and flexibility of our approach by applying it to three optimization problems: statistical arbitrage trading strategies, financial hedging, and obstacle avoidance robot control.
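To illustrate the time-consistent dynamic programming principle the abstract refers to, here is a minimal sketch (not the authors' code) that evaluates a fixed policy under a nested CVaR, one example of a dynamic convex risk measure. The two-period binomial tree, the per-step losses, the risk level `alpha`, and the `cvar` helper are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cvar(losses, probs, alpha):
    """CVaR_alpha of a discrete loss distribution, via the
    Rockafellar-Uryasev representation
        CVaR_alpha(L) = min_c { c + E[(L - c)_+] / (1 - alpha) }.
    For a discrete L the minimum is attained at a support point,
    so it suffices to search over the loss values themselves."""
    return min(c + probs @ np.maximum(losses - c, 0.0) / (1.0 - alpha)
               for c in losses)

# Toy two-period tree (illustrative assumption): under the fixed policy,
# each step incurs a loss of +1 or -1 with probability 1/2.
step_losses = np.array([1.0, -1.0])
probs = np.array([0.5, 0.5])
alpha = 0.75  # illustrative risk level

# Backward recursion expressing time consistency: the value at time t is
# the one-step CVaR of (current loss + continuation value at time t+1).
v1 = cvar(step_losses, probs, alpha)       # value at every time-1 node
v0 = cvar(step_losses + v1, probs, alpha)  # time-0 value of the policy
print(v0)
```

Composing one-step risk evaluations backward in this way is what makes the criterion time-consistent: a policy judged optimal at time 0 remains optimal when re-evaluated at later nodes, which is generally not true of applying CVaR once to the total loss.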
DOI: 10.1111/mafi.12388