Numerical Minimization Methods
| Published in: | Introduction to Shape Optimization, p. 1 |
|---|---|
| Main Authors: | |
| Format: | Book Chapter |
| Language: | English |
| Published: | Society for Industrial and Applied Mathematics (SIAM), 2003 |
| Series: | Advances in Design and Control |
| ISBN: | 0898715369, 9780898715361 |
Summary:

Unlike authors of many other books on structural and shape optimization, we discuss basic nonlinear programming algorithms only very briefly. Nonlinear programming is central to operations research, and a vast literature is available. The reader unfamiliar with the subject should consult, for example, [DS96], [Fle87], [GMW81], and [BSS93].

We focus on methods used for the numerical realization of the "upper" optimization level in the examples presented in Chapters 7 and 8. We do not consider methods, such as preconditioned conjugate gradients, intended for solving the very large, sparse quadratic (or nearly quadratic) programming problems with very simple constraints (or no constraints) that arise from the discretization of state problems.
As we have seen previously, the algebraic form of all discrete sizing and optimal shape design problems leads to a minimization problem of the following type:
min { f(x) : x ∈ U },   (P)
where f : U → ℝ is a continuous function and U ⊂ ℝⁿ is a nonempty set representing the constraints. In this chapter we briefly discuss typical gradient-type methods and global optimization methods based on function evaluations, which will be used for the realization of (P). In the second part of the chapter we also mention methods of multiobjective optimization.
4.1 Gradient methods for unconstrained optimization
We start with gradient-type methods for unconstrained optimization, i.e., when U = ℝⁿ.
Suppose that f is once continuously differentiable in ℝⁿ. Then a necessary condition for x* to solve (P) is that x* be a stationary point of f; i.e., x* satisfies the system of n generally nonlinear equations

∇f(x*) = 0.   (4.1)
Therefore any minimizer of f in ℝⁿ is at the same time a stationary point. For convex functions the converse also holds: every stationary point is a minimizer.
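To make the stationarity condition (4.1) concrete, here is a small gradient-descent sketch (our illustration, not a method from the book) applied to a convex quadratic; by convexity, the stationary point the iteration finds is the global minimizer. The objective, step size, and tolerance are arbitrary assumptions for the example:

```python
# Convex quadratic f(x, y) = (x - 3)^2 + 2*(y + 1)^2 with gradient
# grad f = (2*(x - 3), 4*(y + 1)); its only stationary point is (3, -1).
def grad_f(x):
    return (2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0))

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Iterate x <- x - step * grad f(x) until (4.1) holds approximately."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:  # stationarity test (4.1)
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

x_star = gradient_descent(grad_f, x0=(0.0, 0.0))
# x_star is approximately (3, -1), the global minimizer
```

For a nonconvex f the same iteration may instead stop at a saddle point or a local minimizer, which is why (4.1) is only a necessary condition.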
| DOI: | 10.1137/1.9780898718690.ch4 |

