An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes

Bibliographic Details
Published in: Journal of Optimization Theory and Applications, Vol. 153, No. 3, pp. 688-708
Main Authors: Bhatnagar, Shalabh; Lakshmanan, K.
Format: Journal Article
Language: English
Published: Boston: Springer US (Springer Nature B.V.), 01.06.2012
ISSN: 0022-3239, 1573-2878
Description
Summary: We develop an online actor–critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework, in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
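As background for the Lagrange multiplier method mentioned in the summary, a constrained average-cost MDP is commonly relaxed into an unconstrained saddle-point problem. The form below is a standard sketch, not taken from the paper; the symbols J, G_i, c_i, theta, lambda and the number of constraints k are illustrative assumptions:

    L(\theta, \lambda) = J(\theta) + \sum_{i=1}^{k} \lambda_i \left( G_i(\theta) - c_i \right), \qquad \lambda_i \ge 0,

where J(\theta) denotes the long-run average cost under the policy with parameter \theta and G_i(\theta) \le c_i are the long-run average inequality constraints. An algorithm of the class described in the summary would then seek a local saddle point of \min_{\theta} \max_{\lambda \ge 0} L(\theta, \lambda), with the actor updating \theta and the multipliers \lambda updated online from the observed sample path costs.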
DOI: 10.1007/s10957-012-9989-5