Topology optimization under uncertainty using a stochastic gradient-based approach

Published in: Structural and Multidisciplinary Optimization, Volume 62, Issue 5, pp. 2255-2278
Main authors: De, Subhayan; Hampton, Jerrad; Maute, Kurt; Doostan, Alireza
Format: Journal Article
Language: English
Publisher: Springer Berlin Heidelberg (Springer Nature B.V.), Berlin/Heidelberg, 01.11.2020
ISSN: 1615-147X, 1615-1488
Description
Summary: Topology optimization under uncertainty (TOuU) often defines objectives and constraints by statistical moments of geometric and physical quantities of interest. Most traditional TOuU methods use gradient-based optimization algorithms and rely on accurate estimates of the statistical moments and their gradients, e.g., via adjoint calculations. When the number of uncertain inputs is large or the quantities of interest exhibit large variability, a large number of adjoint (and/or forward) solves may be required to ensure the accuracy of these gradients. The optimization procedure itself often requires a large number of iterations, which may render TOuU computationally expensive, if not infeasible. To tackle this difficulty, we here propose an optimization approach that generates a stochastic approximation of the objective, constraints, and their gradients via a small number of adjoint (and/or forward) solves per optimization iteration. A statistically independent (stochastic) approximation of these quantities is generated at each optimization iteration. The total cost of this approach is only a small factor larger than that of the corresponding deterministic topology optimization problem. We incorporate the stochastic approximation of the objective, constraints, and their design sensitivities into two classes of optimization algorithms. First, we investigate the stochastic gradient descent (SGD) method and a number of its variants, which have been successfully applied to large-scale optimization problems for machine learning. Second, we study the use of the proposed stochastic approximation approach within conventional nonlinear programming methods, focusing on the globally convergent method of moving asymptotes (GCMMA). The performance of these algorithms is investigated with structural design optimization problems utilizing a solid isotropic material with penalization (SIMP), as well as an explicit level set method.
These investigations, conducted on both two- and three-dimensional structures, illustrate the efficacy of the proposed stochastic gradient approach for TOuU applications.
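The core idea summarized above, estimating the gradient of an expected objective from only a few independent samples per optimization iteration, can be illustrated on a toy problem. The sketch below is not the paper's method (which involves finite element forward/adjoint solves, SIMP, and GCMMA); it is a minimal stand-in where each random sample plays the role of one forward/adjoint solve, and the quadratic objective and its parameters (`mu`, `sigma`, the step schedule) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic objective E[f(x, xi)] with
#   f(x, xi) = 0.5 * ||x - xi||^2,  xi ~ N(mu, sigma^2 I).
# The exact minimizer of the expectation is x* = mu.
mu = np.array([1.0, -2.0, 0.5])   # illustrative "true" design
sigma = 0.3                        # input uncertainty level

def sample_gradient(x, n_samples=4):
    """Unbiased gradient estimate from a handful of independent
    samples per iteration -- the analogue of using only a few
    forward/adjoint solves instead of a converged moment estimate."""
    xi = mu + sigma * rng.standard_normal((n_samples, x.size))
    return (x - xi).mean(axis=0)

x = np.zeros(3)                    # initial design
for k in range(2000):
    step = 0.5 / (1.0 + 0.01 * k)  # decaying step size for SGD
    x -= step * sample_gradient(x)

print(x)  # approaches mu despite the noisy per-iteration gradients
```

The point of the sketch is the cost structure: each iteration uses a fixed, small number of samples, so the total cost scales with the iteration count rather than with the accuracy of any single moment estimate, mirroring the abstract's claim that the approach costs only a small factor more than deterministic topology optimization.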
DOI: 10.1007/s00158-020-02599-z