An application of deep reinforcement learning to algorithmic trading



Bibliographic Details
Published in: Expert Systems with Applications, Volume 173, article 114632
Main Authors: Théate, Thibaut; Ernst, Damien
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd, 01.07.2021
ISSN: 0957-4174; 1873-6793
Description
Summary:
• Reinforcement learning (RL) formalization of the algorithmic trading problem.
• Novel trading strategy based on deep reinforcement learning (DRL), named TDQN.
• Rigorous performance assessment methodology for algorithmic trading.
• TDQN algorithm delivers promising results, surpassing benchmark strategies.

This research paper presents an approach based on deep reinforcement learning (DRL) to the algorithmic trading problem of determining the optimal trading position at any point in time during trading activity in the stock market. It proposes a novel DRL trading policy that maximises the Sharpe ratio performance indicator across a broad range of stock markets. Named the Trading Deep Q-Network (TDQN) algorithm, this new DRL approach is inspired by the popular DQN algorithm and significantly adapted to the specific algorithmic trading problem at hand. Training of the resulting reinforcement learning (RL) agent is based entirely on the generation of artificial trajectories from a limited set of historical stock market data. To assess the performance of trading strategies objectively, the paper also proposes a novel, more rigorous performance assessment methodology. Under this new assessment approach, promising results are reported for the TDQN algorithm.
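The Sharpe ratio named in the summary as the optimisation target is the standard risk-adjusted return measure: mean excess return divided by the standard deviation of returns, typically annualised. The sketch below is a minimal illustration of that metric and is not taken from the paper; the function name, the daily periodicity, and the zero risk-free rate are assumptions for the example.

```python
import math

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualised Sharpe ratio of a sequence of periodic returns.

    returns          -- per-period simple returns (e.g. daily)
    risk_free_rate   -- annual risk-free rate, spread evenly over periods
    periods_per_year -- 252 assumes daily trading-day returns
    """
    # Excess return over the per-period risk-free rate
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    n = len(excess)
    mean = sum(excess) / n
    # Sample standard deviation (n - 1 denominator)
    variance = sum((r - mean) ** 2 for r in excess) / (n - 1)
    std = math.sqrt(variance)
    # Annualise by the square root of the number of periods per year
    return (mean / std) * math.sqrt(periods_per_year)
```

For example, `sharpe_ratio([0.01, 0.02, -0.01, 0.03])` evaluates to roughly 11.6, an artefact of the tiny sample; in practice the ratio is computed over a full backtest of daily returns.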
Scopus ID: 2-s2.0-85101170161
DOI: 10.1016/j.eswa.2021.114632