An application of deep reinforcement learning to algorithmic trading

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 173, p. 114632
Main Authors: Théate, Thibaut; Ernst, Damien
Format: Journal Article
Language:English
Published: New York: Elsevier Ltd, 01.07.2021
ISSN: 0957-4174, 1873-6793
Description
Summary:
• Reinforcement learning (RL) formalization of the algorithmic trading problem.
• Novel trading strategy based on deep reinforcement learning (DRL), denominated TDQN.
• Rigorous performance assessment methodology for algorithmic trading.
• TDQN algorithm delivers promising results, surpassing benchmark strategies.

This research paper presents an innovative approach based on deep reinforcement learning (DRL) to solve the algorithmic trading problem of determining the optimal trading position at any point in time during a trading activity in the stock market. It proposes a novel DRL trading policy designed to maximise the resulting Sharpe ratio performance indicator on a broad range of stock markets. Denominated the Trading Deep Q-Network algorithm (TDQN), this new DRL approach is inspired by the popular DQN algorithm and significantly adapted to the specific algorithmic trading problem at hand. The training of the resulting reinforcement learning (RL) agent is entirely based on the generation of artificial trajectories from a limited set of stock market historical data. In order to objectively assess the performance of trading strategies, the paper also proposes a novel, more rigorous performance assessment methodology. Following this new assessment approach, promising results are reported for the TDQN algorithm.
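The Sharpe ratio that the TDQN policy is trained to maximise can be made concrete with a short sketch. The Python function below is a standard annualised Sharpe ratio computation over a series of daily returns; it is an illustrative assumption, not code from the paper (the function name, the 252-trading-day convention, and the synthetic return series are all hypothetical).

```python
import numpy as np

def annualised_sharpe_ratio(daily_returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualised Sharpe ratio of a series of daily returns.

    Illustrative helper only: the paper uses the Sharpe ratio as the
    performance indicator its TDQN trading policy aims to maximise,
    but this is not the authors' implementation.
    """
    excess = np.asarray(daily_returns) - risk_free_rate / periods_per_year
    std = excess.std(ddof=1)
    if std == 0:
        return 0.0
    # Scale the per-period ratio by sqrt(N) to annualise it.
    return np.sqrt(periods_per_year) * excess.mean() / std

# Usage on a hypothetical, randomly generated daily return series.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=252)
print(f"Annualised Sharpe ratio: {annualised_sharpe_ratio(returns):.2f}")
```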
scopus-id:2-s2.0-85101170161
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.114632