Dynamic nonlinear programming - new optimization algorithms via dynamic controllers

Detailed bibliography
Published in: IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), Vol. 3, pp. 509-514
Main author: Shimizu, K.
Format: Conference paper
Language: English, Japanese
Published: IEEE, 20.01.2003
Description
Summary: This paper proposes some new algorithms for unconstrained optimization problems, obtained by applying a control-theoretic technique called direct gradient descent control. A static optimization problem is solved with a dynamic controller, by which the convergence speed can be accelerated to a great extent. The main idea is to consider an objective function F(x) and its time derivative dF(x(t))/dt as a performance criterion of control and to apply a gradient descent method. We then obtain several new optimization algorithms which use the second-order derivative (Hessian) F_xx(x(t)) but, unlike the Newton method, not its inverse. Simulations confirm that the proposed methods possess excellent convergence properties to an optimum. It is also interesting that, to some extent, our methods are able to find a global rather than merely a local optimum.
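The abstract describes the structure of the methods but not their exact update laws, so the following is only a minimal Python sketch of the structural point it emphasizes: a descent step in which the Hessian F_xx(x) multiplies the gradient instead of being inverted as in Newton's method. The update rule x <- x - eta*(F_x + alpha*F_xx F_x), the step sizes eta and alpha, the toy quadratic objective, and all function names are assumptions for illustration, not the algorithm from the paper.

```python
import numpy as np

# Toy quadratic test problem F(x) = 0.5 x^T A x - b^T x (an assumption for
# illustration; the paper's own test problems are not given in this record).
A = np.array([[10.0, 2.0],
              [2.0, 1.0]])
b = np.array([1.0, 1.0])

def grad_F(x):
    # Gradient F_x(x) of the toy objective.
    return A @ x - b

def hess_F(x):
    # Hessian F_xx(x); constant for a quadratic, but kept as a function
    # so the same loop would work for a general smooth F.
    return A

def hessian_weighted_descent(x0, eta=0.05, alpha=0.05, iters=5000, tol=1e-10):
    """Iterate x <- x - eta * (F_x + alpha * F_xx @ F_x).

    The Hessian appears only as a multiplier of the gradient; no matrix
    inverse is formed, in contrast to the Newton step -F_xx^{-1} F_x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_F(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - eta * (g + alpha * hess_F(x) @ g)
    return x

x_star = hessian_weighted_descent([5.0, -5.0])
print(x_star, np.linalg.solve(A, b))  # both are close to the minimizer of F
```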
ISBN: 9780780357310, 0780357310
ISSN: 1062-922X
DOI: 10.1109/ICSMC.1999.823260