Dynamic nonlinear programming-new optimization algorithms via dynamic controllers


Bibliographic Details
Published in: IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), Vol. 3, pp. 509-514
Main Author: Shimizu, K.
Format: Conference Proceeding
Language:English
Japanese
Published: IEEE 20.01.2003
Subjects:
ISBN:9780780357310, 0780357310
ISSN:1062-922X
Description
Summary: This paper proposes new algorithms for unconstrained optimization problems, obtained by applying a control-theoretic technique called direct gradient descent control. A static optimization problem is solved with a dynamic controller, which can greatly accelerate the convergence speed. The main idea is to regard an objective function F(x) and its time derivative dF(x(t))/dt as a performance criterion of control and to apply a gradient descent method. This yields several new optimization algorithms that use the second-order derivative (Hessian) F_xx(x(t)) but, unlike the Newton method, not its inverse. Simulations confirm that the proposed methods possess excellent convergence properties toward an optimum. Interestingly, the methods can also, to some extent, find a global rather than merely a local optimum.
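The abstract does not give the paper's exact update law, but the general idea it describes can be sketched as follows: treat dF(x(t))/dt = ∇F(x)ᵀu as the controlled quantity, let a dynamic controller with internal state u drive x, and use the Hessian F_xx only by multiplication, never by inversion. The objective, gains a, b, step size h, and the specific dynamics below are illustrative assumptions, not values or equations from the paper.

```python
import numpy as np

def F(x):
    # Simple strongly convex test objective with minimizer at (1, -2).
    # (Illustrative choice; not an example from the paper.)
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad_F(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

def hess_F(x):
    # Constant Hessian for this quadratic objective.
    return np.diag([2.0, 20.0])

def dynamic_descent(x0, h=0.01, a=1.0, b=1.0, steps=5000):
    """Euler discretization of the controller-driven dynamics

        x' = u,    u' = -a * grad F(x) - b * F_xx(x) @ u.

    The Hessian enters only through a matrix-vector product (no inverse,
    in contrast to Newton's method). Gains a, b and step h are assumed
    tuning parameters, not values taken from the paper.
    """
    x = np.asarray(x0, dtype=float)
    u = np.zeros_like(x)
    for _ in range(steps):
        x = x + h * u
        u = u + h * (-a * grad_F(x) - b * hess_F(x) @ u)
    return x

x_star = dynamic_descent([5.0, 5.0])
print(x_star)  # approaches the minimizer (1, -2)
```

For this quadratic, the continuous dynamics form a damped second-order system whose stiffness and damping are both shaped by the Hessian, so the trajectory settles at the stationary point of F without ever solving a linear system.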
DOI:10.1109/ICSMC.1999.823260