Undiscounted control policy generation for continuous-valued optimal control by approximate dynamic programming
| Title: | Undiscounted control policy generation for continuous-valued optimal control by approximate dynamic programming |
|---|---|
| Authors: | Lock, Jonathan, 1987; McKelvey, Tomas, 1966 |
| Source: | International Journal of Control, 95(10):2854-2864 |
| Topics: | Approximate dynamic programming, undiscounted infinite-horizon, optimal control, control policy |
| Abstract: | We present a numerical method for generating the state-feedback control policy associated with general undiscounted, constant-setpoint, infinite-horizon, nonlinear optimal control problems with continuous state variables. The method is based on approximate dynamic programming, and is closely related to approximate policy iteration. Existing methods typically terminate based on the convergence of the control policy and either require a discounted problem formulation or demand the cost function to lie in a specific subclass of functions. The presented method extends on existing termination criteria by requiring both the control policy and the resulting system state to converge, allowing for use with undiscounted cost functions that are bounded and continuous. This paper defines the numerical method, derives the relevant underlying mathematical properties, and validates the numerical method with representative examples. A MATLAB implementation with the shown examples is freely available. |
| File description: | electronic |
| Access URL: | https://research.chalmers.se/publication/524715 https://research.chalmers.se/publication/523565 https://research.chalmers.se/publication/524551 https://research.chalmers.se/publication/524715/file/524715_Fulltext.pdf |
| Database: | SwePub |
| ISSN: | 0020-7179, 1366-5820 |
| DOI: | 10.1080/00207179.2021.1939892 |
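
The abstract describes an approximate-policy-iteration scheme whose termination test requires both the control policy and the resulting closed-loop state to converge. The MATLAB fragment below is a minimal sketch of that idea only; the toy scalar dynamics, quadratic stage cost, input bounds, grid, and tolerances are illustrative assumptions of ours and are not taken from the paper or its freely available UCPADP implementation (see the access URLs above).

```matlab
% Sketch: grid-based approximate dynamic programming with a termination
% test on BOTH the greedy policy and the simulated closed-loop state.
% All problem data below are assumed for illustration, not from the paper.

xg   = linspace(-2, 2, 201);           % state grid
umin = -1;  umax = 1;                  % input bounds
f = @(x, u) x + 0.5*u;                 % assumed scalar dynamics
g = @(x, u) x.^2 + 0.1*u.^2;           % stage cost, zero at the setpoint x* = 0

J        = zeros(size(xg));            % cost-to-go approximation on the grid
pol_prev = zeros(size(xg));            % policy from the previous iteration
x_prev   = inf(1, 50);                 % closed-loop trace from the previous iteration

for it = 1:200
    Jnew    = zeros(size(J));
    pol_new = zeros(size(pol_prev));
    for i = 1:numel(xg)
        % One-step lookahead; cost-to-go off the grid by linear interpolation,
        % with the successor state clamped to the grid limits.
        Q = @(u) g(xg(i), u) + ...
            interp1(xg, J, min(max(f(xg(i), u), xg(1)), xg(end)), 'linear');
        [pol_new(i), Jnew(i)] = fminbnd(Q, umin, umax);
    end

    % Simulate the closed loop from a representative initial state.
    x_sim = zeros(1, 50);  x_sim(1) = 1.5;
    for k = 1:numel(x_sim) - 1
        xq = min(max(x_sim(k), xg(1)), xg(end));
        x_sim(k+1) = f(x_sim(k), interp1(xg, pol_new, xq, 'linear'));
    end

    % Terminate only when both the policy and the resulting state converge.
    dpol = max(abs(pol_new - pol_prev));
    dx   = max(abs(x_sim - x_prev));
    J = Jnew;  pol_prev = pol_new;  x_prev = x_sim;
    if dpol < 1e-4 && dx < 1e-4
        fprintf('Policy and closed-loop state converged after %d iterations.\n', it);
        break
    end
end
```

In this sketch the dual test can trigger before the cost-to-go table itself has fully settled, which is in the spirit of the extended termination criterion the abstract describes for undiscounted, bounded, continuous cost functions; the authors' actual method, proofs, and examples are in the paper and the linked MATLAB package.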