Newton’s Method, Bellman Recursion and Differential Dynamic Programming for Unconstrained Nonlinear Dynamic Games

Bibliographic Details
Published in: Dynamic Games and Applications, Vol. 12, No. 2, pp. 394–442
Main Authors: Di, Bolei, Lamperski, Andrew
Format: Journal Article
Language:English
Published: New York: Springer US, 01.06.2022
Springer Nature B.V.
ISSN: 2153-0785, 2153-0793
Description
Summary: Dynamic games arise when multiple agents with differing objectives control a dynamic system. They model a wide variety of applications in economics, defense, and energy systems, among others. However, compared to single-agent control problems, computational methods for dynamic games are relatively limited. As in the single-agent case, only specific dynamic games can be solved exactly, so approximation algorithms are required. In this paper, we show how to extend the Newton step algorithm, the Bellman recursion, and the popular differential dynamic programming (DDP) method for single-agent optimal control to full-information nonzero-sum dynamic games. We show that the Newton step can be solved in a computationally efficient manner and inherits its original quadratic convergence rate to open-loop Nash equilibria, and that the approximate Bellman recursion and DDP methods are very similar and can be used to find local feedback O(ε²)-Nash equilibria. Numerical examples are provided.
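To illustrate the Newton-to-Nash idea at the heart of the abstract, here is a minimal sketch, not the paper's dynamic formulation: for a hypothetical static two-player game, an open-loop Nash equilibrium solves the stacked first-order conditions F(u) = (∂J1/∂u1, ∂J2/∂u2) = 0, and Newton's method on F converges quadratically near a regular root. The cost functions and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical two-player static game (illustrative, not from the paper):
#   player 1 minimizes J1(u1, u2) = 0.5*u1**2 + u1*u2 + 0.1*u1**4 over u1
#   player 2 minimizes J2(u1, u2) = 0.5*u2**2 + 0.5*u1*u2 + u2 over u2
# A Nash equilibrium makes each player's own gradient vanish simultaneously.

def F(u):
    """Stacked stationarity conditions (dJ1/du1, dJ2/du2)."""
    u1, u2 = u
    return np.array([u1 + u2 + 0.4 * u1**3,   # dJ1/du1
                     0.5 * u1 + u2 + 1.0])    # dJ2/du2

def jacobian(u):
    """Jacobian of F; its determinant 0.5 + 1.2*u1**2 is always positive."""
    u1, _ = u
    return np.array([[1.0 + 1.2 * u1**2, 1.0],
                     [0.5,               1.0]])

def newton_nash(u0, tol=1e-12, max_iter=20):
    """Newton iteration on F; quadratic convergence near the equilibrium."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u -= np.linalg.solve(jacobian(u), F(u))
        if np.linalg.norm(F(u)) < tol:
            break
    return u

u_star = newton_nash([1.0, 1.0])
print("equilibrium:", u_star, "residual:", np.linalg.norm(F(u_star)))
```

In the paper's setting the same principle is applied to the concatenated optimality conditions over an entire trajectory, where the structure of the dynamics makes the Newton step computable efficiently.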
DOI: 10.1007/s13235-021-00399-8