Differential dynamic programming for finite‐horizon zero‐sum differential games of nonlinear systems

Bibliographic Details
Published in: International Journal of Robust and Nonlinear Control, Vol. 33, No. 18, pp. 11062-11084
Main Authors: Zhang, Bin; Jia, Yingmin; Zhang, Yuqi
Format: Journal Article
Language:English
Published: Bognor Regis: Wiley Subscription Services, Inc., 01.12.2023
ISSN:1049-8923, 1099-1239
Description
Summary: In this article, we present an iterative algorithm based on differential dynamic programming (DDP) for finite-horizon two-person zero-sum differential games. The technique of DDP is used to expand the Hamilton–Jacobi–Isaacs (HJI) partial differential equation into higher-order differential equations. Using value-function and saddle-point approximations, the DDP expansion is transformed into an algebraic matrix equation in integral form. Based on this algebraic matrix equation, a DDP iterative algorithm is developed to learn the solution of the differential game. A rigorous proof is given to guarantee the iterative convergence of the value function and the saddle point. The new algorithm is fundamentally different from existing results in the sense that it overcomes the technical obstacle of handling the time-varying behavior of the HJI partial differential equation. Simulation examples are given to demonstrate the effectiveness of the proposed method.
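
For context, the finite-horizon HJI equation referred to in the summary takes, in its standard two-player zero-sum form, roughly the following shape; the notation (dynamics f, running cost r, terminal cost phi, value function V) is generic textbook notation assumed here, not taken from the article:

% Standard finite-horizon HJI partial differential equation for a
% two-player zero-sum game (generic notation, assumed; not quoted from
% the article). Player u minimizes, player w maximizes, horizon is T.
\[
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u}\,\max_{w}\,\Bigl[\, r(x,u,w) + \nabla_x V(x,t)^{\top} f(x,u,w) \,\Bigr],
\qquad V(x,T) = \phi\bigl(x(T)\bigr).
\]

The DDP procedure described above expands this equation around a nominal solution and, through value-function and saddle-point approximations, reduces it to an algebraic matrix equation that is iterated to convergence.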
DOI:10.1002/rnc.6932