Adaptive Multi-Step Evaluation Design With Stability Guarantee for Discrete-Time Optimal Learning Control
| Published in: | IEEE/CAA journal of automatica sinica Vol. 10; no. 9; pp. 1797 - 1809 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Piscataway: Chinese Association of Automation (CAA); The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2023. Author affiliation: Faculty of Information Technology, the Beijing Key Laboratory of Computational Intelligence and Intelligent System, the Beijing Laboratory of Smart Environmental Protection, and the Beijing Institute of Artificial Intelligence, Beijing University of Technology, Beijing 100124, China |
| Subjects: | |
| ISSN: | 2329-9266, 2329-9274 |
| Online Access: | Get full text |
| Summary: | This paper is concerned with a novel integrated multi-step heuristic dynamic programming (MsHDP) algorithm for solving optimal control problems. It is shown that, when initialized with the zero cost function, MsHDP converges to the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. The stability of the system under the control policies generated by MsHDP is then analyzed, and a general stability criterion is designed to determine the admissibility of the current control policy; the criterion applies not only to traditional value iteration and policy iteration but also to MsHDP. Based on the convergence result and the stability criterion, the integrated MsHDP algorithm using immature control policies is developed to greatly accelerate learning. An actor-critic structure is used to implement the integrated MsHDP scheme, with neural networks serving as the parametric architecture for evaluating and improving the iterative policy. Finally, two simulation examples demonstrate that the learning effectiveness of the integrated MsHDP scheme surpasses that of other fixed or integrated methods. |
|---|---|
| DOI: | 10.1109/JAS.2023.123684 |
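The multi-step idea summarized in the abstract sits between value iteration (one-step evaluation) and policy iteration (full evaluation). The following is a minimal sketch of that flavor of algorithm on a discrete-time LQR problem, not the paper's method or notation: the matrices `A`, `B`, `Q`, `R` and the helper names are illustrative assumptions. Starting from the zero cost function, each iteration improves the policy greedily and then evaluates it for `n_steps` stage costs before bootstrapping with the current cost matrix.

```python
import numpy as np

# Illustrative double-integrator plant and quadratic cost (assumed, not
# from the paper): x_{k+1} = A x_k + B u_k, cost sum of x'Qx + u'Ru.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def greedy_gain(P):
    # Policy improvement: u = -K x minimizing the one-step Bellman target
    # under the quadratic cost estimate V(x) = x' P x.
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def multi_step_update(P, n_steps):
    # N-step evaluation of the greedy policy, then bootstrap with P:
    # P_new = sum_{j<N} (Ac')^j (Q + K'RK) Ac^j + (Ac')^N P Ac^N.
    K = greedy_gain(P)
    Ac = A - B @ K                     # closed-loop dynamics under u = -K x
    Qc = Q + K.T @ R @ K               # per-step cost under u = -K x
    P_new = np.zeros_like(P)
    M = np.eye(2)                      # accumulates Ac^j
    for _ in range(n_steps):
        P_new += M.T @ Qc @ M
        M = Ac @ M
    return P_new + M.T @ P @ M         # tail bootstrapped with current P

P = np.zeros((2, 2))                   # zero initial cost function
for _ in range(200):
    P = multi_step_update(P, n_steps=5)

# At the fixed point, P should satisfy the discrete algebraic Riccati
# equation (the LQR form of the HJB equation).
residual = (Q + A.T @ P @ A
            - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            - P)
print("Riccati residual:", np.max(np.abs(residual)))
```

With `n_steps=1` the update reduces to plain value iteration, and letting `n_steps` grow approaches policy iteration; the intermediate choice is what gives multi-step schemes their speed/stability trade-off.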