Residual Sarsa algorithm with function approximation

Bibliographic Details
Published in: Cluster Computing, Vol. 22, Suppl. 1, pp. 795-807
Main Authors: Fu, Qiming; Hu, Wen; Liu, Quan; Luo, Heng; Hu, Lingyao; Chen, Jianping
Format: Journal Article
Language:English
Published: New York: Springer US (Springer Nature B.V.), 01.01.2019
ISSN:1386-7857, 1573-7543
Description
Summary: In this work, we propose an efficient algorithm, the residual Sarsa algorithm with function approximation (FARS), to improve the performance of the traditional Sarsa algorithm, using the gradient-descent method to update the function parameter vector. In the learning process, the Bellman residual method is adopted to guarantee the convergence of the algorithm, and a new update rule for the action-value function parameter vector is adopted to address instability and slow convergence. To accelerate the convergence rate of the algorithm, we introduce a new factor, the forgotten factor, which helps improve the robustness of the algorithm's performance. Experimental results on two classical reinforcement learning benchmark problems show that the FARS algorithm outperforms other related reinforcement learning algorithms.
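
The summary refers to a gradient-descent update of the function parameter vector driven by the Bellman residual. Below is a minimal sketch of such a residual-gradient Sarsa step with linear function approximation; the feature vectors, step size, and function name are illustrative assumptions, and the paper's new update rule and forgotten factor are not reproduced here.

```python
import numpy as np

def residual_sarsa_update(theta, phi_sa, phi_next, reward,
                          gamma=0.99, alpha=0.1, done=False):
    """One gradient-descent step on the squared Bellman residual
    for a linear action-value function Q(s, a) = theta . phi(s, a)."""
    q_sa = theta @ phi_sa
    q_next = 0.0 if done else theta @ phi_next
    delta = reward + gamma * q_next - q_sa            # Bellman residual
    # Residual-gradient direction: differentiate 0.5 * delta**2 w.r.t. theta,
    # including the bootstrapped target term gamma * Q(s', a').
    grad = (gamma * phi_next - phi_sa) if not done else -phi_sa
    return theta - alpha * delta * grad

# Illustrative call with random features (not from the paper's benchmarks).
rng = np.random.default_rng(0)
theta = np.zeros(8)
theta = residual_sarsa_update(theta, rng.standard_normal(8),
                              rng.standard_normal(8), reward=1.0)
```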
DOI:10.1007/s10586-017-1303-8