Convergence of Successive Approximation Methods with Parameter Target Sets

Bibliographic Details
Title: Convergence of Successive Approximation Methods with Parameter Target Sets
Authors: Levy, A.B.
Source: Mathematics of Operations Research 30(3):765-784
Publisher Information: Institute for Operations Research and the Management Sciences (INFORMS), 2005.
Publication Year: 2005
Keywords: Numerical methods based on nonlinear programming, 4. Education, 0211 other engineering and technologies, 02 engineering and technology, inclusion solving, variational analysis, constrained optimization, convergence analysis, Methods of successive quadratic programming type, numerical optimization, augmented Lagrangian methods, penalty methods, Numerical mathematical programming methods, generalized continuity, Sensitivity, stability, parametric optimization, barrier methods, trust region methods, sequential quadratic programming
Description: Successive approximation methods appear throughout numerical optimization, where a solution to an optimization problem is sought as the limit of solutions to a succession of simpler approximation problems. Such methods include essentially any standard penalty method, barrier method, trust region method, augmented Lagrangian method, or sequential quadratic programming (SQP) method, as well as many other methods. The approximation problems on which a successive approximation method is based typically depend on parameters, in which case the performance of the method is related to the corresponding sequence of parameters. For many successive approximation methods, the sequence of parameters might need only approach some parameter target set for the method to have nice convergence properties. Successive approximation methods could be analyzed as examples of a generic inclusion solving method from Levy [23] because the solutions to the approximation problems satisfy necessary optimality inclusions. However, the inclusion solving method from Levy [23] was developed for single-parameter target points. In this paper, we extend the results from Levy [23] to allow parameter target sets and apply these results to the convergence analysis of successive approximation methods. We focus on two important convergence issues: (1) the rate of convergence of the iterates generated by a successive approximation method and (2) the validity of the limit as a solution to the original problem. An augmented Lagrangian method allowing quite general parameter updating is explored in detail to illustrate how the framework presented here can expose interesting new alternatives for numerical optimization.
Publication Type: Article
File Description: application/xml; application/pdf
Language: English
ISSN: 0364-765X (print); 1526-5471 (online)
DOI: 10.1287/moor.1050.0153
Access URLs: https://www.jstor.org/stable/pdfplus/25151682.pdf
https://ideas.repec.org/a/inm/ormoor/v30y2005i3p765-784.html
https://dblp.uni-trier.de/db/journals/mor/mor30.html#Levy05
https://econpapers.repec.org/RePEc:inm:ormoor:v:30:y:2005:i:3:p:765-784
https://pubsonline.informs.org/doi/abs/10.1287/moor.1050.0153
Document ID: edsair.doi.dedup.....c62bf8cf3008484adb9fdcc83c02d7dc
Database: OpenAIRE
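
Illustrative note: the abstract above describes a generic scheme in which each iterate solves a simpler, parameter-dependent approximation problem and the parameter sequence only needs to approach a target set. The following minimal Python sketch of an augmented Lagrangian loop is not the paper's algorithm; the test problem, multiplier/penalty updates, and tolerance are assumptions chosen purely to illustrate that generic pattern.

# Illustrative problem (not from the paper):
#   minimize (x1 - 1)^2 + (x2 - 2)^2   subject to   x1 + x2 = 1
import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def c(x):                       # equality constraint, c(x) = 0
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, mu):
    # Approximation problem: unconstrained in x for fixed parameters (lam, mu)
    return f(x) + lam * c(x) + 0.5 * mu * c(x) ** 2

x, lam, mu = np.zeros(2), 0.0, 1.0          # initial iterate and parameters
for k in range(20):
    # Solve the k-th approximation problem.
    res = minimize(augmented_lagrangian, x, args=(lam, mu))
    x = res.x
    # Update the parameters between iterations: multiplier step plus a
    # (quite arbitrary) penalty increase, chosen here for illustration.
    lam += mu * c(x)
    mu *= 2.0
    if abs(c(x)) < 1e-8:
        break

print("iterate:", x, "multiplier:", lam, "constraint violation:", c(x))

In this toy run the iterates approach the solution (0, 1) with multiplier 2, while the penalty parameter is free to keep growing; read loosely, the parameter pair (lam, mu) approaches a set of acceptable targets rather than a single target point, which is the kind of behavior the paper's parameter-target-set framework is meant to cover.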